Search Results: "seanius"

3 April 2011

Cyril Brulebois: Debian XSF News #8

This is the eighth Debian XSF News issue. For a change, I'm going to use a numbered list, which should make it easier to tell people which item to look for when pointing to a given URL. Feel free to let me know if that seems like a nice idea or whether it hurts readability. Also, it was prepared several days ago already, so I'm publishing it (with the bits of polishing it still needed) without mentioning what happened in the last few days (see you in the next DXN issue!).
  1. Let's start with a few common bugs reported over the past few weeks:
    • The server can crash due to some X Font Server (XFS) issue as reported upstream in FDO#31501 or in Debian as #616578. The easy fix is to get rid of FontPath in xorg.conf, or to remove the xfs package. It's deprecated anyway. (A short xorg.conf example follows this list.)
    • Xdm used to crash when started from init, but not afterwards (#617208). Not exactly fun to reproduce, but with the help of a VM, bisecting libxt to find the guilty commit was quite easy. After a quick upload with this commit reverted, a real fix was pushed upstream; a new upstream was released, packaged, and uploaded right after that.
    • We've had several reports of flickering screens, which are actually due to upowerd's polling every 30 seconds: #613745.
    • Many bug reports were filed due to a regression on the kernel side for the 6.0.1 squeeze point release, leading to cursor issues with Intel graphics: #618665.
  2. Receiving several similar reports reminded me of the CurrentProblemsInUnstable page on the wiki, which is long unmaintained (and that's why I'm not linking to it). I'm not exactly sure what to do at this point, but I think having a similar page on http://pkg-xorg.alioth.debian.org/, linked from the how to report bugs page, would make sense. Common issues as well as their solutions or workarounds for stable should probably go to the FAQ instead.
  3. As explained in DXN#7, we're waiting for the kernel to migrate to wheezy. The 2.6.38 upstream release was quickly pushed to unstable, which is great news, even if it's not really ready yet (since it's still failing to build on armel and mips).
  4. I've been using markdown for our documentation, basically since it looked sufficient for our needs, and since I've been using it to blog for years now, but it had some limitations. I've been hearing a lot of nice things about asciidoc for a while (hi, Corsac!), so I gave it a quick shot. Being quite happy with it, I converted our documentation to asciidoc, which at the bare minimum buys us a nice CSS (at least nicer than the one I wrote), and an automatic table of contents if we ask for it, which should help navigating to the appropriate place. A few drawbacks:
    • The syntax (or the parser's behaviour) changed a lot since lenny's version, so updating the online documentation broke badly. Thanks to the nice Alioth admins, the version from lenny-backports was quickly installed and the website should look fine.
    • The automatic table of contents is generated through JavaScript, which doesn't play nicely with wkhtmltopdf (WebKit-based HTML to PDF converter), since the table of contents gets pixelated in the generated PDF documents. We could use a2x to generate documents through the DocBook way, but that means dealing with XSL stylesheets as far as I can tell; that looks time-consuming and a rather low-priority task. But of course, contributions are welcome.
  5. When I fixed missing XSecurity (#599657) for squeeze, I didn't notice the 1.9 packages were forked right before that, so they were affected too. I fixed it in sid since then (and in git for experimental). I noticed that when Ian reported a crash with large timeouts in xauth calls, which I couldn't reproduce since untrusted cookies without XSecurity don't trigger this issue. I reported that upstream as FDO#35066, which got marked as a duplicate of (currently restricted) FDO#27134. My patch is currently still waiting for a review.
  6. Let's mention upcoming updates, prepared in git, but not uploaded yet:
    • mesa 7.10.1, prepared by Chris (RAOF); will probably be uploaded to experimental, unless 7.10 migrates to testing first, in which case that update will target unstable.
    • Intel driver: Lintian's been complaining about the .so symlinks for a while, and I finally gave it a quick look. It seems one is supposed to put e.g. libI810XvMC.so.1 in /etc/X11/XvMCConfig to use that library, so the symlinks are indeed not needed at all, and I removed them.
    • Tias Guns and Timo Aaltonen introduced xinput-calibrator in a git repository; that s a generic touchscreen calibration tool.
  7. Here come the updated packages, with uploader between square brackets (JVdG = Julien Viard de Galbert, Sean = Sean Finney). For the next issue, I'll try to link to the relevant entries in the Package Tracking System.
    • [KiBi] libxt: to unstable, as mentioned above, with a hot fix, then with a real fix.
    • [KiBi] synaptics input driver: to unstable and experimental, fixing the FTBFS on GNU/kFreeBSD.
    • [KiBi] xterm: new upstream, to unstable.
    • [KiBi] libdrm: new upstream, to experimental. A few patches to hide private symbols were sent upstream, but I ve seen no reactions yet (and that apparently happened in the past already).
    • [KiBi] xorg-server 1.9.5rc1 then 1.9.5, to unstable.
    • [KiBi] xutils-dev to unstable: the bootstrap issue goes away, thanks to Steve's report.
    • [KiBi] libxp to unstable, nothing fancy, that's libxp.
    • [KiBi] keyboard input driver: mostly documentation update, to unstable and experimental.
    • [KiBi] mouse input driver: fixes BSD issues, to unstable and experimental.
    • [KiBi] intel video driver: to experimental, but the debian-unstable branch can be used to build the driver against unstable s server.
    • [KiBi] xfixes: protocol to unstable, and library to experimental (just in case); this brings support for pointer barriers.
    • [JVdG] openchrome video driver: Julien introduced a debugging package, and got rid of the (old!) via transitional package. He also performed his first upload as a Debian Maintainer. Yay!
    • [KiBi] siliconmotion video driver: to unstable.
    • [KiBi] pixman: new upstream release candidate, to experimental
    • [Sean] last but not least: many compiz packages to experimental.
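To illustrate the FontPath workaround from item 1 above: in /etc/X11/xorg.conf, comment out or delete any FontPath line pointing at a font server. A hedged sketch (the unix/:7100 path is just the classic xfs example, not necessarily what your file contains):
Section "Files"
#   FontPath "unix/:7100"    # served by xfs; remove this line (or purge the xfs package)
EndSection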

9 March 2011

Sean Finney: compiz updates!

For those who are interested in the world of compiz, new development snapshots have been trickling their way into experimental for the past week and are now available for widespread testing. They're being sent to experimental instead of straight to unstable for a few reasons. So, anyway, the packages are all uploaded now--give them a try!
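For anyone who hasn't pulled from experimental before, a minimal sketch of how to try them (this assumes you first add an experimental line to your sources.list; the mirror and the extra plugin package are just the usual ones, adjust to what you actually use):
# deb http://ftp.debian.org/debian experimental main
sudo apt-get update
sudo apt-get install -t experimental compiz compiz-fusion-plugins-main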

24 February 2011

Sean Finney: git merge -s theirs

...For the cases where, yes, you actually do want it. For the impatient, assuming you're in a clean checkout on the tip of your current branch:
git merge -s ours ref-to-be-merged
git diff ref-to-be-merged | git apply -R --index
git commit -F .git/COMMIT_EDITMSG --amend
you can then double check that everything is as you expect with git diff ref-to-be-merged, which should be empty.
The longer story
It's noted from time to time that git does not have a -s theirs option to complement the -s ours strategy when merging (or, it did, but it was removed long ago, see below). For those not savvy, the idea with git merge -s ours is to pretend as though a merge from some other branch has occurred, though the result remains the current branch's contents, unchanged. This is a neat trick that can be used to show that all changes in some branch have been superseded by the current branch, which can help ease merges further down the line as parallel development continues. However, there is no corresponding -s theirs option, which would conversely say that "everything developed up to here is superseded by this other branch". This feature was discussed and discarded by one of the git authors, who instead advised a developer to throw away the previous work and start on a fresh branch:
One big problem "-s theirs" has, compared to the above "reset to origin, discarding or setting aside the failed history" is that your 'master' history that your further development is based on will keep your failed crap in it forever if you did "-s theirs". Hopefully you will become a better programmer over time, and you may eventually have something worth sharing with the world near the tip of your master branch. When that happens, however, you cannot offer your master branch to be pulled by the upstream, as the wider world will not be interested in your earlier mistakes at all.
In what would probably qualify as "most cases", he's right, but sometimes you do want to do this, and not because you have "crap" in your history, but perhaps because you want to change the baseline for development in a public repository where rebasing should be avoided.
In debian, for example, it's pretty common for maintainers who use git-buildpackage to build debian packages from git to have two branches: an "upstream" branch and a "debianized" branch. The upstream branch will follow the changes from the upstream project (either by importing tarballs, or by pulling directly from the project's git repo if they also use git), while the debianized branch contains all the packaging related changes as well as any debian-specific changes to the code. In this context, if the upstream project uses git and rebases their history (out of your control), or if they shift development to a different branch, you might want to have a way to merge with their latest changes without rebasing/abandoning your current debian branch. Hence, a "theirs" merge.
The author is totally right that the upstream project may be less interested in pulling from such a repository (since they would drag in the entire previous history), and if that's a problem you might need to splinter off the old branch and start a fresh branch instead. But at least for the projects with which I work, I find that pulling is pretty much one-way from upstream to debian, and that the patches flow back to the upstream project by way of bug trackers, git format-patch | sendmail, or quilt patches sitting in the patch-tracker. And if it becomes a problem later you're no worse off; you can still split off a new branch or take some other action.
So anyway, I asked Teh Internetz and found a few hits in the right direction, such as this guy at stack overflow, or this guy here. But from an aesthetic point of view I thought their solutions still left some room for improvement--they were both more complicated, involved setting up temporary branches, and, I don't know, just weren't pretty on the eyes. So, ending where I started, this is the relatively clean and easy to follow 3 lines I conjured up, which I will give to teh internetz for posterity.
git merge -s ours ref-to-be-merged
git diff ref-to-be-merged | git apply -R --index
git commit -F .git/COMMIT_EDITMSG --amend
Alternatively, if you want to keep the local upstream branches fast-forwardable, a potential compromise is to work with the understanding that for sid/unstable, the upstream branch can from time to time be reset/rebased (based on events that are ultimately out of your control on the upstream project's side). This isn't a big deal, and working with that assumption means that it's easy to keep the local upstream branch in a state where it only takes fast-forward updates. However, on the debian branch you're less interested in the clean upstream development, and instead want to keep a sane history of the debianization work, so on this branch you still do something like a merge -s theirs (though in this case, you probably want to amend the previous version's ./debian dir back in post-merge). So, applying the same approach as above with a slight modification, in practice (assuming you're on the clean tip of the debian unstable branch) that'd be:
git branch -m upstream-unstable upstream-unstable-save
git branch upstream-unstable upstream-remote/master
git merge -s ours upstream-unstable
git diff upstream-unstable | git apply -R --index --exclude="debian/*"
git commit -F .git/COMMIT_EDITMSG --amend
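A quick sanity check, assuming the commands above ran cleanly: after the amended commit, only the packaging directory should differ from the new upstream branch.
git diff --stat upstream-unstable    # expect to see only debian/... entries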
PS In spirit, this is a similar approach to what's done by the XSF team, though I would argue that with what I'm describing both the approach and the resulting history are a bit cleaner and easier to follow. Also, thanks to ron, mrvn, jcristau, and nthykier on #debian-devel for some quick reviewing of this.

21 February 2011

Sean Finney: Recent debian updates

Packaging updates
Help wanted: cacti
I've also filed an RFA bug for cacti (and spine), as I'm not really using cacti any more in my day-to-day work. The packages are in pretty good shape, but it would really be for the best if someone who actually used cacti were able to more thoroughly test new versions before they were uploaded. I'm also open to co-maintaining the package and/or sponsoring uploads; so if you're already using cacti and looking to get involved in debian development, this could be a fun place to start :)
Debian patch tracker
Finally got around to fixing a couple of post-release (and older) issues in the patch-tracker. Hopefully it should be back to updating and following the latest stuff in the archive. Next up I'll see if I can find someone from DSA to upgrade the hosting machine to squeeze and install a few extra dependencies for the next feature update, which contains a pretty substantial rewrite of the guts from some home-rolled wsgi+cheetah goo to a more established and maintainable django-based system. In the meantime, please let me know if you see any problems (via maintenance cat or otherwise), and I will take a look.

4 June 2010

Colin Watson: Hacking on grub2

Various people observed in a long thread on debian-devel that the grub2 package was in a bit of a mess in terms of its release-critical bug count, and Jordi and Stefano both got in touch with me directly to gently point out that I probably ought to be doing something about it as one of the co-maintainers. Actually, I don't think grub2 was in quite as bad a state as its 18 RC bugs suggested. Of course every boot loader failure is critical to the person affected by it, not to mention that GRUB 2 offers more complex functionality than any other boot loader (e.g. LVM and RAID), and so it tends to accumulate RC bugs at rather a high rate. That said, we'd been neglecting its bug list for some time; Robert and Felix have both been taking some time off, Jordi mostly only cared about PowerPC and can't do that any more due to hardware failure, and I hadn't been able to pick up the slack.
Most of my projects at work for the next while involve GRUB in one way or another, so I decided it was a perfectly reasonable use of work time to do something about this; I was going to need fully up-to-date snapshots anyway, and practically all the Debian grub2 bugs affect Ubuntu too. Thus, with the exception of some other little things like releasing the first Maverick alpha, I've spent pretty much the last week and a half solidly trying to get the grub2 package back into shape, with four uploads so far. The RC issues that remain are: If we can fix that lot, or even just the ones that are reasonably well-understood, I think we'll be in reasonable shape. I'd also like to make grub-mkconfig a bit more robust in the event that the root filesystem isn't one that GRUB understands (#561855, #562672), and I'd quite like to write some more documentation.
On the upside, progress has been good. We have multiple terminal support thanks to a new upstream snapshot (#506707), update-grub runs much faster (#508834, #574088), we have DM-RAID support with a following wind (#579919), the new scheme with symlinks under /dev/mapper/ works (#550704), we have basic support for btrfs / as long as you have something GRUB understands properly on /boot (#540786), we have full info documentation covering all the user-adjustable settings in /etc/default/grub, and a host of other smaller fixes. I'm hoping we can keep this up.
If you'd like to help, contact me, especially if there's something particular that isn't being handled that you think you could work on. GRUB 2 is actually quite a pleasant codebase to work on once you get used to its layout; it's certainly much easier to fix bugs in than GRUB Legacy ever was, as far as I'm concerned. Thanks to tools like grub-probe and grub-fstest, it's very often possible to fix problems without needing to reboot for anything other than a final sanity check (although KVM certainly helps), and you can often debug very substantial bits of the boot loader - the bits that actually go wrong - using standard tools such as strace and gdb. Upstream is helpful and I've been able to get many of the problems above fixed directly there. If you have a sound knowledge of C and a decent level of understanding of the environment a boot loader needs to operate in - or for that matter specialist knowledge of interesting device types - then you should be able to find something to do.
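As a brief aside on the debugging tools mentioned above, here is a hedged sketch of the kind of userspace checks that avoid a reboot (invocations from memory; see the respective man pages for the authoritative syntax):
grub-probe --target=fs /boot        # which filesystem module GRUB would need for /boot
grub-probe --target=device /boot    # which device /boot lives on
grub-probe --target=drive /boot     # the same, in GRUB's (hdX,Y) notation
# grub-fstest can exercise GRUB's own filesystem code against a device or image
# without rebooting; see grub-fstest(1) for its command syntax.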

1 May 2010

Sean Finney: Nouveau, Gallium3D, and Compiz on Debian

update (20100503): It's been brought to my attention that there might have been a regression in one of the libraries (mesa, maybe), so master might not be as reliable a reference point for this guide. So, just in case, here are the commit hashes that were used from the different projects: for the non-git-savvy, git checkout -b mybranch <hash>. We now return to your regular broadcasting...
A week or two ago I figured it was about time that I took the plunge and tried out the latest nouveau drivers from experimental on my laptop. Unfortunately, it meant bidding adieu to compiz (apart from the bling, it does actually provide some pretty damned useful features). I thought I'd give it a shot at least for a while on principle, since I've always had to think of the nvidia drivers as a necessary evil, and their lack of proper xrandr support is really, really annoying. Since then I haven't noticed any problems, apart from the following message after every suspend/resume:
[  931.213239] Uhhuh. NMI received for unknown reason 80 on CPU 0.
[  931.213240] You have some hardware problem, likely on the PCI bus.
[  931.213242] Dazed and confused, but trying to continue

No idea what's causing it, but otherwise the system is stable and functional. If anyone knows what can be done about this, I'd appreciate hearing from you. My graphics card, by the way:
01:00.0 VGA compatible controller: nVidia Corporation G72M [Quadro NVS 110M/GeForce Go 7300] (rev a1)

In the time since then, the nouveau driver has made its way into unstable, and a lot of distros (ubuntu, fedora, ...) are even using it as the default driver. So before going any further with the 3d/compiz stuff, I highly recommend that you get the packaged version working first. This will not only make sure that the basic driver works, but it will also make recovery/rollback a lot easier. Since it's all packaged in the standard repositories, it's not much harder than installing a few packages and editing a few files... so come on, give it a try :)
Getting the 2d driver working in Debian sid
Really, this is so easy it's hardly worth writing up, but I'm running with the -v flag this morning... Make sure you don't have any reference to nvidia in /etc/modules or other places that might cause it to be automatically loaded. Then remove the proprietary glx bindings, which would otherwise interfere with the nouveau driver.
sudo apt-get remove nvidia-glx

Next, you'll need to install a kernel that includes updated drm support. Normally this would require a kernel >= 2.6.33.1, but the debian kernel maintainers have graciously backported this code to the 2.6.32-4 kernel package. Note that this isn't the default kernel version in testing/unstable at the time of writing, so you have to explicitly install it along with the nouveau driver.
sudo apt-get install linux-image-2.6.32-4-amd64 xserver-xorg-video-nouveau

If you are already running this kernel and using the proprietary nvidia drivers, via module-assistant or otherwise, you /might/ need to uninstall them first. On my system I left them installed in the old kernel version but did not install them on the new kernel version.
Required xorg configuration
I don't believe that nouveau is picked up by default at the time of writing, even if available, so you have to have the following in your xorg.conf:
sudo sh -c "cat > /etc/X11/xorg.conf" << EOF
Section "Device"
    Identifier     "nVidia Corporation G72M [Quadro NVS 110M/GeForce Go 7300]"
    Driver         "nouveau"
EndSection
EOF

This was all I needed to do, apart from rebooting into the new kernel. For those who don't want to venture outside the packaging system, this is about as far as you can go for the time being. Though really, if you're bothering to read this far along, you're probably more interested in what comes next :)
Getting 3d working (without voiding your distro warranty)
So I thought I'd give the 3d-accelerated gallium a go. Their wiki page certainly makes one think twice, but at this point I was seriously fiending to get my bling back (well, the compiz scale plugin, anyway) and figured it wouldn't hurt too much to try. Via google and trawling around in various forums I found a number of how-to documents for getting Gallium3d on debian/ubuntu systems, but most of them recommended doing things in a way that wasn't easily reversible and that could totally hose your system in a not-so-hypothetical worst-case scenario. It's pretty hard to roll back a make install done into /usr, for example, and subsequent package upgrades can leave your system even worse off, as they will probably clobber half of what you just did, leaving the other half totally broken. And if you have a laptop which requires the gui for networking (i.e. network-manager), then fixing/rolling back the changes is not entirely trivial, since you'd likely have to re-download the various drm/mesa/xorg packages.
So enter the tried and true unix utility stow (apt-get install stow, of course). stow is a utility that lets you install software into arbitrary folders, and then manage the system-wide installation of the software via a farm of symlinks pointing at the installed location. It's probably better known among the sysadmin crowd than the desktop user crowd, but it's incredibly useful for the task at hand, as it will let us easily install multiple copies of different software in a way that's very easy to update and/or roll back. RTFM stow(1) for more information. So, on to the juicy stuff...
Install needed software, remove problematic packages, setup, etc
At least on my system, I had to remove libdrm-dev to keep things from accidentally linking against it instead of the updated version that we install later on below. There's probably a way to override this with CFLAGS, but I figure this is simpler.
sudo apt-get remove libdrm-dev
sudo apt-get install stow build-essential xorg-dev git-core libtool mesa-common-dev automake autoconf 
mkdir ~/nouveau

Compile a linux-2.6.34 RC kernel
cd ~/nouveau
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
cd linux-2.6
git remote add -f nouveau git://anongit.freedesktop.org/nouveau/linux-2.6
git checkout -b nouveau-master nouveau/master
make menuconfig

Enable device drivers -> staging drivers -> nouveau. Make any other changes you might want to make (I just took defaults for everything else).
make deb-pkg

The linux-firmware-free debian package ships some of the same stuff as the generated linux-firmware-image package. I decided to opt for the upstream firmware, though I'm not sure that this is necessary:
sudo apt-get remove linux-firmware-free
sudo dpkg -i ~/nouveau/linux-image*.deb ~/nouveau/linux-firmware-image*.deb
sudo update-grub

Install a copy of dri2proto
cd ~/nouveau
git clone git://anongit.freedesktop.org/xorg/proto/dri2proto
cd dri2proto
./autogen.sh --prefix=/usr/local/stow/dri2proto-20100428
sudo mkdir -p /usr/local/stow/dri2proto-20100428
sudo make install
sudo stow -d /usr/local/stow dri2proto-20100428

Install a cutting-edge version of libdrm
cd ~/nouveau
git clone git://anongit.freedesktop.org/git/mesa/drm/
cd drm
./autogen.sh --prefix=/usr/local/stow/drm-20100428 --disable-intel --disable-radeon --enable-nouveau-experimental-api
make
sudo mkdir -p /usr/local/stow/drm-20100428
sudo make install
sudo stow  -d /usr/local/stow  drm-20100428

Likewise for mesa
cd ~/nouveau
git clone git://anongit.freedesktop.org/git/mesa/mesa
cd mesa
./autogen.sh --prefix=/usr/local/stow/mesa-20100428  --enable-debug --enable-glx-tls --disable-asm --with-dri-drivers= --enable-gallium-nouveau --disable-gallium-intel --disable-gallium-radeon --disable-gallium-svga --with-state-trackers=glx,dri --with-demos=xdemos,demos,trivial,tests
make
sudo mkdir -p /usr/local/stow/mesa-20100428 
sudo make install
sudo stow -d /usr/local/stow mesa-20100428

And now the xorg driver...
cd ~/nouveau
git clone git://anongit.freedesktop.org/git/nouveau/xf86-video-nouveau/
cd xf86-video-nouveau
./autogen.sh --prefix=/usr/local/stow/xf86-video-nouveau-20100428
make
sudo mkdir -p /usr/local/stow/xf86-video-nouveau-20100428
sudo make install
sudo stow -d /usr/local/stow xf86-video-nouveau-20100428

Configuration
Configure xorg to look in an alternate location for xorg modules and set up DRI:
sudo sh -c "cat > /etc/X11/xorg.conf" << EOF
Section "Files"
    ModulePath "/usr/local/lib/xorg/modules,/usr/lib/xorg/modules"
EndSection
Section "ServerFlags"
    Option "GlxVisuals" "all"
EndSection
Section "Device"
    Identifier     "nVidia Corporation G72M [Quadro NVS 110M/GeForce Go 7300]"
    Driver         "nouveau"
EndSection
EOF

I've /heard/ that disabling TV out is a necessity on some cards, though I haven't seen this myself:
sudo sh -c "cat > /etc/modprobe.d/nouveau.conf" << EOF
# not entirely sure this is necessary, but heard it helped with vsync
options nouveau tv_disable=1
EOF

Finally, the one thing that cannot be done in /usr/local is making the nouveau DRI driver available. From what I could tell, the path that mesa uses to find the DRI libraries is hard-coded (corrections welcome). Since you won't already have this file (otherwise, why would you be following along here?), just drop in a symlink to the file in /usr/local:
sudo ln -s /usr/local/lib/dri/nouveau_dri.so /usr/lib/dri/

Finally, reboot, and hope for the best :)
Riding the bleeding edge: how to upgrade later on
Again, stow really shows its usefulness here. Using drm as an example case, upgrading an individual component would look something like:
cd ~/nouveau/drm
git clean -dfx
git pull
today=`date +%Y%m%d`
./autogen.sh --prefix=/usr/local/stow/drm-$today --disable-intel --disable-radeon --enable-nouveau-experimental-api
make
sudo mkdir -p /usr/local/stow/drm-$today
sudo make install
sudo stow -d /usr/local/stow -D drm-20100428
sudo stow -d /usr/local/stow  drm-$today

Note how the old version is still installed, so you can switch components around pretty freely.
Abort! Abort!: How to roll back changes
In case your system becomes totally unusable, or you've decided that you just want to go back to the safe and cozy world of what's provided by the debian packages, simply do the following (a rough sketch is included at the end of this post): Assuming that you're using the nouveau driver from unstable, this should be all that is necessary.
Conclusion
Overall impression: I am very, very happy with the resulting setup. I was led to believe that this whole thing was much more unstable and unusable than it actually is! While there are certainly some glitches and missing features, I feel comfortable enough using it that I don't have plans to switch back. But if I find myself in a situation where I can't use the new setup, rolling back shouldn't be a problem either. I figure I'm happy with what I have now and it will only get better.
What works, at least to a "kinda works" level
What still doesn't work
Further reading, references
Thanks
A couple shoutouts to those who helped me with this on the way:
Feedback / Corrections
Unfortunately, I've had to disable comments for the time being due to overwhelming quantities of spam. So please drop me a line via email or on IRC (seanius on oftc/freenode).
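As a rough sketch of the rollback mentioned above (assuming the stow package names and dates used earlier in this post; adjust them to whatever you actually installed):
sudo stow -d /usr/local/stow -D xf86-video-nouveau-20100428 mesa-20100428 drm-20100428 dri2proto-20100428
sudo rm /usr/lib/dri/nouveau_dri.so          # the symlink added earlier
sudo rm /etc/X11/xorg.conf                   # or restore your previous xorg.conf
sudo apt-get install --reinstall xserver-xorg-video-nouveau libdrm2 libgl1-mesa-dri
# then reboot into the packaged 2.6.32-4 kernel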

6 October 2009

Sean Finney: cat > /dev/console

ur CATSLOCK is on
this computer is in use and can not be unlocked

19 September 2009

Sean Finney: dpkg triggers, the lost how-to document

this weekend while giving a bit of TLC to the php packages, i thought i'd finally get around to tackling #490023 and similar bugs, which meant learning about how to use the triggers feature provided by dpkg. unfortunately documentation for this feature was in fairly short supply, and after extensive searching (and by extensive searching i mean typing "dpkg triggers howto" into a google search) i had a couple short manpages (dpkg-trigger(1) and deb-triggers(5)) and an overly verbose and possibly out of date /usr/share/doc/dpkg/triggers.txt.gz. none of these documents really gave a clear "big-picture" for how to get going either. so, as is often the case, i got distracted from my intended task and ended up on a little side-project putting together a nice howto/tutorial for how to integrate dpkg triggers into a package. note that there may be misunderstandings or inaccuracies in the document, as it's based on an afternoon's worth of hacking and q&a directed at #debian-dpkg (thanks to Guillem and Raphaël for fielding those questions). so if i'm off on anything, please feel free to let me know! to get started:
git clone http://git.debian.org/git/users/seanius/dpkg-triggers-example.git

and for those who are too lazy even for that, the README follows:
The debian package triggertest
This package is a dead-simple "triggers in a nutshell". It shows a few different ways that triggers can be used in practice, to hopefully cover most of the general use cases for triggers.
The purpose of triggers
Triggers are used to ensure that during the standard package-management process certain operations always run, but not more than necessary. For example:
How triggers work in a nutshell
Trigger-using packages can be classified in two behavioral categories: When a consumer is triggered, its postinst script is run with the arguments:
postinst triggered "<trigger1> ... <triggerN>"

i.e. $2 contains an iterable list of activated triggers. Note that if a consumer is going to be normally configured (i.e. it is also being updated), then no triggering may occur and thus the standard control flow of the maintainer scripts should still take care to handle this. A "trigger" is declared for the consumer by shipping a file in DEBIAN/triggers (in the case of debhelper based packages, ./debian/consumer-package.triggers will JDTRT). Useful RTFM: deb-triggers(5). Example:
interest /path/to/a/directory
interest my-second-trigger

This declares that the package in question is interested in two triggers. The first trigger has a leading path separator in its name, which instructs dpkg that any filesystem modifications underneath this directory by other packages should cause the respective consumer(s) to be triggered. The second name is completely arbitrary (and also global, so you should probably pick something reasonably specific and identifiable), and will only cause a trigger to be activated when a producer
  1. ships its own triggers file with a corresponding "activate my-second-trigger", or
  2. invokes the trigger explicitly in a maintainer script by calling "dpkg-trigger my-second-trigger" (a short sketch of both follows this list).
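To make the producer side concrete, here is a hedged illustration using the hypothetical trigger name from above. A producer's DEBIAN/triggers file would contain the single line:
activate my-second-trigger

while the explicit variant is just a call from one of the producer's maintainer scripts:
dpkg-trigger my-second-trigger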
Another perhaps useful RTFM: dpkg-trigger(1).
Examples contained in this source package
This source package produces a single "consumer" package and a number of different "producer" packages. The consumer is triggered in various ways, such as: In this case the triggers correspond to an "update-foo" and "update-bar" command, which do nothing other than echo to the console that they are being run. In practice they'd correspond to updating some kind of database or restarting a daemon.
More information
For more information about triggers, see /usr/share/doc/dpkg/triggers.txt.gz, which will make a lot more sense having a working example like this package as reference. Especially useful is "Transition hints for existing packages", which can be used if you need to gracefully handle migrating in support for triggers across other packages not directly under your control.
Corrections, suggestions, etc
Are appreciated and should be sent to the author, below.
-- Sean Finney seanius@debian.org Sat, 19 Sep 2009 15:57:48 +0200

5 September 2009

Sean Finney: the debian patch tracker has a new home

thanks largely to Peter Palfrader and Bernd Zeimetz, the debian patch-tracker is back alive and now hosted as an official debian.org service: http://patch-tracker.debian.org. in addition to the new host, i've made a few very basic optimizations that will hopefully reduce the strain when someone browses through some of the larger patch sets (ehem, openoffice.org, i'm looking at you). if you notice any problems with the service, please drop me a line.

14 May 2009

Sean Finney: automatic commit log messages with git

continuing on the trend of finding novel ways to make life more interesting with git, here's the description from a new hook:
# prepare-commit-msg hook for debian package git repositories
#
# this script scans the diff that is going into a commit, and automatically
# injects some "proposed" comments based on what it finds in the diff.  this
# can be used to avoid a few extra keystrokes when performing some of the
# more standard/boring tasks.

(see below for how to fetch the script) this one definitely falls in the "carrot" category, as it encourages properly isolated (and thus automatically identifiable) changes. the proposed comments are then given to the standard editor, so one can easily amend them, append "Closes:" lines, etc. some sample use-cases currently implemented: as always, comments/feedback/suggestions/etc welcome :)
using this new hook
note this is the same repo as the previous hooks i've blogged about, so if you already have that set up you can skip the clone and instead just pull in the changes. also note that this is in your local repo, not the remote one. to set it up:
REPO_PATH=/path/to/your/repo.git
HOOK_REPO_PATH=/somewhere/you/want/to/put/it
git clone git://git.debian.org/users/seanius/vcs-hooks/git-hooks.git $HOOK_REPO_PATH
ln -sf $HOOK_REPO_PATH/debian/git-hooks/prepare-commit-msg-guess-message.py $REPO_PATH/.git/hooks/prepare-commit-msg

there aren't currently any configurable options in this hook.
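for readers who haven't written one of these before, a minimal prepare-commit-msg hook looks roughly like this (purely illustrative and unrelated to the script above; the debian/control heuristic is made up):
#!/bin/sh
# .git/hooks/prepare-commit-msg -- git calls this as: prepare-commit-msg <msgfile> [<source> [<sha1>]]
MSGFILE="$1"
# toy heuristic: if debian/control is staged, propose a standard first line
if git diff --cached --name-only | grep -qx 'debian/control'; then
  { echo "update debian/control"; cat "$MSGFILE"; } > "$MSGFILE.tmp" && mv "$MSGFILE.tmp" "$MSGFILE"
fi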

26 April 2009

Sean Finney: new git hook for debian packaging: pre-receive fileset fascism

previously i threw together a small hook to make it a bit easier to avoid duplicate work while maintaining packages, as well as easily keep the BTS up to date with relevant information. now i've commonized the code just a bit and have a second hook which can be used to maintain certain (what i believe to be) good practices for keeping packages in git. admittedly, it's a bit more "stick" than "carrot" with respect to streamlining workflows, but i feel the justifications and the resulting benefits are worth it. so the hook does basically two things, either of which can be customized and/or disabled.
prevent "non-debian" changes on a "debian" branch
assuming that there are separate branches for "debian" packaging and for "upstream" development, this hook prevents upstream-style changes on the debian branches. that is to say, changes to files outside ./debian are not permitted on a "debian" branch, unless of course you're merging from an upstream branch. instead, changes to the source in a debian branch should be managed by quilt-style "feature patches", or a more advanced branch topology using some kind of feature branches (topgit or similar).
prevent changelog modifications from being mixed in with other commits
this one might be a bit more controversial for some. basically, the idea is that: therefore, this hook "declines" commits which modify debian/changelog, unless it is the only file being changed.
using this new hook
note this is the same repo as the other hook, so if you already have that set up you can skip the clone and instead just pull in the changes. to set it up:
REPO_PATH=/path/to/your/repo.git
HOOK_REPO_PATH=/somewhere/you/want/to/put/it
git clone git://git.debian.org/users/seanius/vcs-hooks/git-hooks.git $HOOK_REPO_PATH
ln -sf $HOOK_REPO_PATH/debian/git-hooks/pre-receive-fileset-fascism.py $REPO_PATH/hooks/pre-receive

the config options for controlling this hook (these are git config options just like the other hook):
# hooks.debianbranches (default: 'debian-.*')
#    a regular expression which indicates which branches are "debian" branches.
#    in the context of this hook such branches are not allowed to have changes
#    in files outside of the ./debian directory.
# hooks.sacredchangelog (default: True)
#    if set to True, debian/changelog can not be changed in a commit that
#    also modifies other files.  this helps ensure changes that are easily
#    merged/cherry-picked/reverted.
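for reference, these are set with plain git config in the (usually bare, server-side) repository where the hook is installed -- hypothetical values below; check the hook's comments for the exact accepted forms:
git config hooks.debianbranches 'debian-.*'
git config hooks.sacredchangelog False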

examples of the hook in action rejecting a commit that has an entangled changelog:
rangda[~/debian/php] git push                                  :)
Counting objects: 9, done.
Delta compression using 2 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 442 bytes, done.
Total 5 (delta 4), reused 0 (delta 0)
fileset-fascism rejecting commit 2fab3269b3d9daedd7013a576483a11ccb4cb86a
    debian/changelog must be changed seperately from other files.
    changed files in this commit:
        debian/changelog
        debian/control
error: hooks/pre-receive exited with error code 1
To ssh://git.debian.org/git/pkg-php/php.git
 ! [remote rejected] debian-sid -> debian-sid (pre-receive hook declined)
error: failed to push some refs to 'ssh://git.debian.org/git/pkg-php/php.git'

rejecting a commit that has non ./debian changes:
rangda[~/debian/php] git push                                  :)
Counting objects: 5, done.
Delta compression using 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 309 bytes, done.
Total 3 (delta 2), reused 0 (delta 0)
fileset-fascism rejecting commit d96669c0780e33b250af81404e263fc181fe0547
    non ./debian changes on a debian branch in this commit.
error: hooks/pre-receive exited with error code 1
To ssh://git.debian.org/git/pkg-php/php.git
 ! [remote rejected] debian-sid -> debian-sid (pre-receive hook declined)
error: failed to push some refs to 'ssh://git.debian.org/git/pkg-php/php.git'

as always, comments/feedback/suggestions/etc welcome :)

23 April 2009

Sean Finney: embedding youtube videos in ikiwiki

so i had a couple videos to mention and i figured i'd use the chance to write an ikiwiki template to do so. it's not the most flexible, but it works! current template code, put into templates/youtube.mdwn:
<div class="youtube">
 <object width="445" height="364" 
     data="http://www.youtube-nocookie.com/v/<TMPL_VAR raw_v>&fs=1&border=1" >
  <param name="allowFullScreen" value="true" />
  <param name="allowscriptaccess" value="always" />
  <param name="movie" 
     value="http://www.youtube-nocookie.com/v/<TMPL_VAR raw_v>&fs=1&border=1" />
 </object>
</div>

using it:
[[!template  id=youtube v="sPq3c5D65hE"]]

gets you:
(not sure if that will show up on a syndicated site) it's also xhtml compliant, unlike the default "share this" text and what's produced by the deprecated embed plugin. it also uses the "no automatic cookies" option provided by youtube. i might try and break out a few more options into parameters the next time i post videos. on a related note, does anyone know a good way to include/display the code of a template on a page? this stuff here is just cut/pasted but it would be nicer to be able to not have to do that to keep it up to date.

18 April 2009

Sandro Tosi: Chain scripts execution in git hooks

The common git hook is something like:
#!/bin/sh

exec "another script"
and since "exec" replaces the current script with the other one, only one command can be executed in a hook. Even more, the common hook takes input from the standard input, so the first command takes it, and what about the second?

The real situation was reportbug's post-receive hook, which looked like:
#!/bin/sh

exec /usr/local/bin/git-commit-notice
but I wanted to add seanius' tagpending hook to it. The solution I found was:
#!/bin/sh

# to save stdin on a file to pass to both scripts
cat > post-receive_tmpfile

/usr/local/bin/git-commit-notice < post-receive_tmpfile
/git/reportbug/reportbug.git/hooks/git-post-receive-url-notifications.py < post-receive_tmpfile

rm post-receive_tmpfile
that saves stdin to a file, which is then passed to both scripts.

Of course, I'd be happy to hear more elegant solutions :)
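A slightly more defensive variant of the same idea (an untested sketch; it just picks a unique temp file via mktemp and cleans up afterwards):
#!/bin/sh

# buffer stdin once, feed it to both hooks, clean up on exit
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
cat > "$tmpfile"

/usr/local/bin/git-commit-notice < "$tmpfile"
/git/reportbug/reportbug.git/hooks/git-post-receive-url-notifications.py < "$tmpfile"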

12 April 2009

Sean Finney: a productive weekend for debian stuff

(note that i've just tried enabling the comments feature of ikiwiki, so feel free to use it :) ) (update: okay so i apparently failed to enable comments earlier... but it should be working now :) )
compiz(-fusion) 0.8.2 uploaded to unstable
for all you lovers of desktop bling, the latest stable release of compiz-fusion has been uploaded to unstable. new features of note:
new one-off script for building compiz packages
the biggest problem with trying to maintain compiz packages is that they're for some reason distributed as almost a dozen different source projects upstream, but extremely interdependent wrt their API. therefore updating the packages to a new upstream version is a Big Pain. i've numbed that pain a bit with a new helper script, which should amongst other things make it easier to manage backports to lenny, which i've already seen people requesting.
compiz 0.8.2 lenny packages available for testing
..which brings us to our next topic. before i upload them to somewhere a little more official, i figure i could put a call out to teh lazywebs to do some QA work and make sure that it doesn't totally explode on your system (unless you're using the explode animation plugin, in which case it's functioning properly) sources.list entry to try it out on an amd64 lenny system:
deb http://people.debian.org/~seanius/compiz/lenny-backports/amd64 ./

sources.list entry to try it out on an i386 lenny system:
deb http://people.debian.org/~seanius/compiz/lenny-backports/i386 ./

and then apt-get install compiz compiz-fusion-plugins-main (etc). so give it a try and let me know how it goes!
pkg-php moving to git
with some initial legwork from Mark Hershberger we've migrated to a git repository for the php packaging vcs. due to the long and convoluted history of our svn repository, we decided it wasn't worth the effort of trying to get every single svn commit mapped into the new repo. as a compromise to having to start from scratch, we used the existing branches/tags on top of a series of upstream tarball imports to get an accurate representation of the release history.
new git<->bts integration hook for debian packaging
i've been dabbling with a new hook to make for more useful integration from a git-buildpackage oriented workflow. in this workflow the changelog is often the last thing prepared, possibly days or even weeks after a fix has been committed for a bug. therefore the standard tagpending and typical changelog-scanning hooks aren't incredibly useful. instead, this hook scans for "Closes:" meta-info in the commits, similar to how "git dch" does, and then sends a notification and/or control commands to the bts. here's an example. note that it also mentions the branch that received the fix in the notification, so it's easier to see which branches have a fix at a quick glance. how to use this for your own git repo:
REPO_PATH=/path/to/your/repo.git
HOOK_REPO_PATH=/somewhere/you/want/to/put/it
git clone git://git.debian.org/users/seanius/vcs-hooks/git-hooks.git $HOOK_REPO_PATH
ln -sf $HOOK_REPO_PATH/git-post-receive-url-notifications.py $REPO_PATH/hooks/post-receive

there's a number of configurable options (take a look at the top of the file for some fairly verbose comments), but it should work with some fairly reasonable defaults out of the box. it can also be configured to do the more traditional changelog-scanning, but i'm finding it to be a better workflow to avoid combining changelog entries with the actual fixes (makes them harder to cherry-pick later), and a notification with a link to the changelog entry really isn't that useful beyond what "bts tag nnn pending" can already do. plus i don't like the typical duplication involved in manually managing the changelog. i find it better to generate the changelog via git-dch and automatically get all the "closes:" tags, and then just do a bit of editorial touchups before preparing/releasing the upload. so give it a try if you like, feel free to send feedback/fixes/etc. for example i think the utf-8 support might still be a bit dodgy, as well as possibly the automatic gitweb url detection and changelog scanning.
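for the curious, the kind of commit message the hook (and git-dch) picks up on looks something like this -- the bug number and summary here are made up:
git commit -m "fix handling of empty configuration files

Closes: #123456"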

31 March 2009

Sven Mueller: Link collection 2009/03

Well, I normally despise things like this link collection, but I thought I might add it anyway, since these are useful links for me and if I don't post them here, I'm likely to forget where to find them in the near future:
  1. Sean Finney has a nice post about storing the list of parameters a (shell) script got in a way that it can be restored later. Quite handy if your script walks through the arguments parsing them (and consuming them while doing so) but you want to be able to display them in a helpful way if the parsing fails at some point.
  2. A while ago, Ingo Jürgensmann had a post that helps with retrieving files from lost+found after a filesystem check, provided that you run his helper script on a regular basis. The same approach can also be used if you have a backup of all files, but lost the sorting work you did after the backup was done. This is possible because running the script can be done more often than you would normally do backups.
  3. He also has a small post about mtr oddities when IPv6 comes into play.
  4. Adeodato Simó wrote about databases and when timestamps that store the timezone information really are more useful than timestamps that don't.
  5. Adeodato also has a short post on using ssh as a socks proxy, which can be quite handy if you are behind a firewall.
Update: Fixed link to Ingo's file retrieval from lost+found article. Thanks to Patrick Schoenfeld, who pointed out that I had used the wrong link.
Also thanks to the anonymous poster who found an alternative way to store and (in a way) restore commandline parameters. The solution doesn't work in as general a way as the one by Sean Finney et al., but it is much shorter and therefore interesting where it can be used (when you control how commandline parameters are processed). See the comments on this post for details.

28 March 2009

Sean Finney: quoting and escaping positional parameters, redux

after all, what could be more exciting? okay, so where we last left off, we had something like:
    # escape any single quotes in an argument
    quote() {
      echo "$1" | sed -e "s,','\\\\'',g"
    }
    # save up a properly quoted/escaped version of "$@"
    for arg in "$@"; do
      saved="$ saved:+$saved  '$(quote "$arg")'"
    done

i got some really interesting feedback from a few people: Loïc Minier, who has some similar code that he cuts and pastes on an as-needed basis, and Ralf Wildenhues, who has overseen some similar code that exists in the autoconf system. a notable feature in Loïc's solution was to quote the string entirely within the escaping function, whereas in mine the result was a "quotable" string (i.e. the quotes were put around the output afterwards).
> escape() {
>     echo "$*" | sed "s/'/'\"'\"'/g; s/.*/'&'/"
> }

Ralf pointed out one potential problem with both of these approaches. on the topic of portability, there are some implementations of echo that will directly interpret control characters ('\n', '\t', etc) and others not. he suggested that "printf" be used instead:
- echo "$1" | sed -e "s,','\\\\'',g"
+ printf '%s\n' "$1" | sed -e "s,','\\\\'',g"

though there's no reason i can think of not to take that a bit further and use a here-document, which could save an extra fork if printf isn't a builtin:
    quote() {
      sed -e "s,','\\\\'',g" << EOF
$1
EOF
    }

also, Ralf suggested in the interest of efficiency that some kind of case statement be used to avoid unnecessary forks when an argument doesn't contain any "'" characters. this is apparently a trick used in autoconf-generated scripts:
    case "$arg" in
    *\'*)
      saved="$ saved:+$saved  '$(quote "$arg")'"
      ;;
    *)
      saved="$ saved:+$saved  '$arg'"
      ;;
    esac

finally, a lingering problem which hasn't been solved on the autoconf side of things (nor in my previous solution) was that in the case that an argument contained trailing newlines, such whitespace would be silently discarded in the quoted version (this is due to how shell command substitution works in general). but really, this is quite the corner case... how often does one run a command line with both an escaped quote and trailing newlines in an argument? an example:
rangda[/home/sean] ./good.sh "isn't very common" "to give args" "with trailing \\n's:
"
original command line, 3 arguments: isn't very common to give args with trailing \n's:
arg: isn't very common
arg: to give args
arg: with trailing \n's:
mangled command line, 0 arguments:
restored command line, 3 arguments: isn't very common to give args with trailing \n's:
arg: isn't very common
arg: to give args
arg: with trailing \n's:
rangda[/home/sean]                                                           :)

however, i noticed that a slightly modified version of Loïc's approach might not suffer from this problem because the string would be quoted before being returned (thus any newlines would be enclosed in quotes that would preserve them). I mentioned this to Ralf who will be bringing it up on the autoconf-patches mailing list... sounds promising anyway. so i give you the combined improved version:
#!/bin/sh -e
# escape any single quotes in an argument and then quote it
quote() {
  sed -e "s,','\\\\'',g; 1s,^,',; \$s,\$,',;" << EOF
$1
EOF
}
# save up a properly quoted/escaped version of "$@"
for arg in "$@"; do
  case "$arg" in
  # when a string contains a "'" we have to escape it
  *\'*)
    saved="$ saved:+$saved  $(quote "$arg")"
    ;;
  # otherwise just quote the variable
  *)
    saved="$ saved:+$saved  '$arg'"
    ;;
  esac
done
echo original command line, $# arguments: "$@"
while test -n "$*"; do echo arg: "$1"; shift; done
echo mangled command line, $# arguments: "$@"
# restore the cmdline
eval set -- "$saved"
echo restored command line, $# arguments: "$@"
while test -n "$*"; do echo arg: "$1"; shift; done

output from the same example cmdline:
rangda[/home/sean] ./better.sh "isn't very common" "to give args" "with trailing \\n's:
"
original command line, 3 arguments: isn't very common to give args with trailing \n's:
arg: isn't very common
arg: to give args
arg: with trailing \n's:
mangled command line, 0 arguments:
restored command line, 3 arguments: isn't very common to give args with trailing \n's:
arg: isn't very common
arg: to give args
arg: with trailing \n's:
rangda[/home/sean]                                                           :)

21 March 2009

Sean Finney: saving, escaping, and restoring positional parameters in a POSIX shell

and now for something completely different... sometimes in a shell script you need to do stuff that modifies the positional parameters ($@, $1, $2, etc) from the cmdline, usually via the builtin shift, maybe from using getopt(1) or similar. for example, it's quite common to see the following in a shell script that parses cmdline flags with getopt
# parse cmdline options with getopt
TEMP=`getopt -o h123 -n $0 -- "$@"`
if [ $? != 0 ] ; then usage >&2 ; exit 1 ; fi
# overwrites $@ with getopt-generated/sanitized arguments
eval set -- "$TEMP"
# destructively iterates across $@
while true; do
  case "$1" in
        -h)
          # handle option h
          shift
        ;;
        ...
        --)
          shift
          break
        ;;
  esac
...
done

but sometimes you want to be able to restore the original positional parameters later, or otherwise quote them in a manner where they can be passed along to another script/function, log them, embed them in a string/file, etc. you can't always just stash them in another variable to use that later, because "$@" has a special meaning to the shell that can't otherwise be reproduced with another variable. one way you could try to store/save them would be to save the original copy of $@ and use eval set -- later on to change it back to the saved variable. using a slightly simplified version of the above:
#!/bin/sh
# save the cmdline
saved="$@"
echo original command line, $# arguments: "$@"
while test -n "$*"; do echo arg: "$1"; shift; done
echo mangled command line, $# arguments: "$@"
# restore the cmdline
eval set -- "$saved"
echo restored command line, $# arguments: "$@"
while test -n "$*"; do echo arg: "$1"; shift; done

and at first glance, it seems to work:
rangda[/home/sean] ./bad.sh one two three
original command line, 3 arguments: one two three
arg: one
arg: two
arg: three
mangled command line, 0 arguments:
restored command line, 3 arguments: one two three
arg: one
arg: two
arg: three

but you may run into problems when some parameters have spaces, or quotes:
rangda[/home/sean] ./bad.sh "1: one" "2: two"
original command line, 2 arguments: 1: one 2: two
arg: 1: one
arg: 2: two
mangled command line, 0 arguments:
restored command line, 4 arguments: 1: one 2: two
arg: 1:
arg: one
arg: 2:
arg: two
rangda[/home/sean] ./bad.sh "one's one"
original command line, 1 arguments: one's one
arg: one's one
mangled command line, 0 arguments:
./bad.sh: eval: line 12: unexpected EOF while looking for matching `''

or worse, shell meta characters:
rangda[/home/sean] ./bad.sh 'i wonder what `pwd` would do'
original command line, 1 arguments: i wonder what `pwd` would do
arg: i wonder what `pwd` would do
mangled command line, 0 arguments:
restored command line, 6 arguments: i wonder what /home/sean would do
arg: i
arg: wonder
arg: what
arg: /home/sean
arg: would
arg: do

the solution is to build up the saved value a bit more manually, single-quoting each parameter (which should prevent any unintended side effects of space/quotes/metacharacters), and escaping existing single quotes inside of parameters.
# escape any single quotes in an argument
quote() {
  echo "$1" | sed -e "s,','\\\\'',g"
}
# save up a properly quoted/escaped version of "$@"
for arg in "$@"; do
  saved="$ saved:+$saved  '$(quote "$arg")'"
done
echo original command line, $# arguments: "$@"
while test -n "$*"; do echo arg: "$1"; shift; done
echo mangled command line, $# arguments: "$@"
# restore the cmdline
eval set -- "$saved"
echo restored command line, $# arguments: "$@"
while test -n "$*"; do echo arg: "$1"; shift; done

let's see how it behaves:
rangda[/home/sean] ./good.sh "1 one" "2: two"
original command line, 2 arguments: 1 one 2: two
arg: 1 one
arg: 2: two
mangled command line, 0 arguments:
restored command line, 2 arguments: 1 one 2: two
arg: 1 one
arg: 2: two
rangda[/home/sean] ./good.sh "one's one"
original command line, 1 arguments: one's one
arg: one's one
mangled command line, 0 arguments:
restored command line, 1 arguments: one's one
arg: one's one
rangda[/home/sean] ./good.sh 'i wonder what `pwd` would do'
original command line, 1 arguments: i wonder what `pwd` would do
arg: i wonder what `pwd` would do
mangled command line, 0 arguments:
restored command line, 1 arguments: i wonder what `pwd` would do
arg: i wonder what `pwd` would do

while not entirely pretty, it does do the trick. i'd be interested to know if anyone has a better way of doing this (or if there are hidden problems in what i've posted above).

13 November 2008

Sean Finney: (experimental) ps3 packages for debian

when not being distracted by other uses of the system, i've managed to spend some more time on packaging some more stuff that's useful on a debian ps3. namely: the last of which i think will be of the most interest to some folks, as it provides an spu-based xvideo extension, which brings back the possibility of playing HD videos etc on the ps3. now if you're interested in checking out these packages, read on, starting with this:
!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=
ZOMG WARNING!!!1!!ONE!! THESE ARE EXPERIMENTAL BLEEDING EDGE PACKAGES. I MAKE NO GUARANTEE WHETHER THESE PACKAGES ARE PROPERLY BUILT, WILL INSTALL, WILL WORK, WILL NOT DESTROY YOUR SYSTEM, WILL NOT CRAWL OUT OF YOUR SYSTEM AND MAKE LONG DISTANCE PHONE CALLS WHILE EATING THAT SNACK YOU WERE SAVING IN THE FRIDGE, ETC.
!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=!=
okay, so now that we've disposed with that formality, i'm glad if you're interested and still here :) so... where were we? oh yes. put this in your sources.list:
deb http://people.debian.org/~seanius/ps3-experimental-packages sid main

source packages are also available if you want to poke around or backport the packages to etch or ubuntu or something else weird like that:
deb-src http://people.debian.org/~seanius/ps3-experimental-packages sid main

now, if you have a properly configured system, you should get a warning:
Reading package lists... Done
W: GPG error: http://people.debian.org sid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY CA78CB3E6E76D81D
W: You may want to run apt-get update to correct these problems

but i promise this key is valid, and mine. but hey, you don't have to take my word for it :)
sudo apt-get install debian-keyring
gpg --no-default-keyring --keyring /usr/share/keyrings/debian-keyring.gpg --export --armor CA78CB3E6E76D81D | sudo apt-key add -

then you can install the packages... assuming you want the xorg video driver:
sudo apt-get install xserver-xorg-video-spu

and update the appropriate Device entry in your xorg.conf, so that the normal fbdev is commented out and instead you use the spufbdev driver. example:
Section "Device"
        Identifier      "Generic Video Card"
        Driver          "spufbdev"
        Option "ShadowFB" "false"
EndSection

and then
sudo /etc/init.d/gdm restart

(or the equivalent if you're using something else). that's it! more packages may trickle in here as well, at least until they find their way into debian proper. i'm also in contact with the upstream authors of this stuff, and they're generally very responsive, so let me know if you have any problems.

26 October 2008

Sean Finney: ps3-kboot for debian

now that ps3-utils is packaged, and it seems spufs didn't take much to get to work, i'd like to turn back to the installation procedure, which was admittedly manual and not exactly as "seamless" as i think it could be. the first step as i see it is to get the bootloader working both generally and also specifically for the installer. by "working", i mean the following: the third of these being the most complicated, as will be detailed below.
how a linux (kboot) "otheros" bootloader works on the ps3
okay, so i'm not intimately familiar with the powerpc architecture, nor with all the details of cross-compilation, but this is as much as i can tell.
how kboot can be used on the installer
including support for this "otheros.bld" bootloader on the installer is fairly easy from a technical point of view, whether it's a usb disk or netinst cdrom image. basically, in either case the media should have a directory called "ps3/otheros", in which the file is placed and named "otheros.bld". when running the PS3 in game-os mode, this is where it looks when you tell it to "scan for otheros" (a tiny sketch of this follows at the end of this post).
how kboot can be used/maintained, generally speaking
beyond a kboot.conf file, nothing really needs to be done on the host os to use kboot. of course, it would be nice to be able to issue updates for an image, which should be possible with a package that makes use of ps3-flash-util from the ps3-utils package. nothing too difficult/complicated there.
how kboot can be used/built within debian
this is the tricky part. in addition to the standard Free Software guidelines which debian is so (in)famous for following, we also have some basic QA rules that are needed from a distribution release management / security perspective. one of these rules is that generally speaking it is considered taboo to include embedded copies of other software within a package, if that software is already available in debian. for example, i don't think it's very likely that the ftp-masters will look very kindly on some new software package that includes a copy of the entire kernel source tree, the gcc compiler, (uc)libc, busybox, coreutils, udev, etc. it may be that a certain subset will require special patches, in which case some convincing/justification will be in order, but for the big ones (i.e. the kernel) there will almost certainly need to be modifications made.
ps3-kboot in ubuntu
there are already source packages available for ps3-kboot in ubuntu. apart from the fact that i can't stand working with packages that use cdbs, this package can't be used because the package maintainers' solution to the previously mentioned problem is basically to ignore that it exists. it also seems that there's quite a bit of customization in this source package with respect to the initrd generation, which is probably worth some review... so i will likely still use this package as a base point of reference.
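as a tiny sketch of the installer-media layout mentioned above (the mount point is hypothetical; it assumes you already have an otheros.bld built and a usb stick mounted at /mnt/usb):
sudo mkdir -p /mnt/usb/ps3/otheros
sudo cp otheros.bld /mnt/usb/ps3/otheros/otheros.bld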

22 October 2008

Sean Finney: SPU support on the ps3 in debian

so i've been looking into the state of things with running debian on my ps3, and while talking to some folks on #cell, it was brought to my attention that there was a problem or two with the libspe2 packages, used for programming SPU-using applications. for those not in the loop, the Cell Broadband Engine chip which powers the ps3 is a multi-core processor, with two general purpose PPU's and 6 specialized SPU's available. Without spufs support, the SPU's aren't available, and the real number-crunching capabilities of the ps3 aren't either. more information on the nitty-gritty of spufs is available here. anyway, the problems are fixed now :) still, as of now there's some one-time setup after installation. namely:
    echo spu    /spu    spufs   defaults    0   0 >> /etc/fstab

(and then mount it or just reboot) there really should be some way to have this set up automatically, either from the installer or through some kind of spufs support package. i'll take it up on the mailing lists and see what the consensus would be for this.
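for reference, mounting it by hand (as root) amounts to something like this -- a hedged sketch; create the mount point first if it doesn't already exist:
mkdir -p /spu
mount -t spufs spufs /spu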
