Search Results: "pkern"

23 August 2020

Philipp Kern: Self-service buildd givebacks now use Salsa auth

As client certificates are on the way out and Debian's SSO solution is effectively not maintained any longer, I switched self-service buildd givebacks over to Salsa authentication. It lives again at https://buildd.debian.org/auth/giveback.cgi. For authorization you still need to be in the "debian" group for now, i.e. be a regular Debian member.

For convenience the package status web interface now features an additional column "Actions" with generated "giveback" links.

Please remember to file bugs if you give builds back because of flakiness of the package rather than the infrastructure, and resist the temptation to use this excessively to let your package migrate. We do not want to end up with packages that require multiple givebacks to actually build in stable, as that would hold up both security and stable updates needlessly and complicate development.

29 December 2016

Philipp Kern: Automating the installation of Debian on z/VM instances

I got tired of manually fetching installation images via either FTP or by manually transferring files to z/VM to test s390x installs. Hence it was about time to automate it. Originally I wanted to instrument an installation via vmcp from another instance on the same host but I figured that I cannot really rely on a secondary instance when I need it and went the s3270/x3270-script way instead.

The resulting script isn't something I'm particularly proud of, especially as it misses error handling that really should be there. But this is not expect: instead you operate on whole screens of data, and z/VM is not particularly helpful in telling you that you just completed your logon either. Anyway, it seems to work for me. It downloads the most recent stable or daily image if they are not present yet, uploads them via DFT to CMS and makes sure that the installation does not terminate when the script disconnects. Sadly DFT is pretty slow, so I'm stuck with 70 kB/s and about five minutes of waiting until kernel and initrd are finally uploaded. Given that installations themselves are usually blazingly fast on System z, I'm not too annoyed by that, though.
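To give an idea of what such screen-scripting looks like, here is a minimal s3270 session sketch. The host name, user name and password are placeholders, and a real script needs the error handling and screen checks discussed above:

```
# Hypothetical s3270 logon sketch (host/credentials are placeholders).
# s3270 reads actions from stdin; each Wait(InputField) blocks until
# the 3270 screen is ready for input again.
s3270 <<'EOF'
Connect(zvm.example.com)
Wait(InputField)
String("LOGON DEBIAN")
Enter()
Wait(InputField)
String("secretpassword")
Enter()
Wait(InputField)
Disconnect()
EOF
```

The awkward part, as noted, is that there is no reliable "logon complete" event: the script only sees full screens and has to infer state from their contents.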

I previously wrote about a parmfile example that sets enough options to bring debian-installer to the point of a working network console via SSH without further prompting. It's a little unfortunate that s390-netdevice needs to be preseeded with the hardware addresses of the network card in all cases, even if only one is available. I should go and fix that. For now this means that the parmfile will be dependent on the actual VM system definition. With that in mind there is an example script in the same gist that writes out a parmfile and then calls the reinstall script mentioned above. Given that debian-installer now supports HTTPS (so far only in the daily images) you can even do a reasonably secure bootstrapping of the network console credentials and preseeding settings.

If you put this pretty generic preseed configuration file onto a securely accessible webserver and reference it from the parmfile, you can also skip the more tedious questions at the beginning of debian-installer. A secure transport is encouraged as preseed files can do anything to your installation process. Unfortunately it seems that there is no way to preseed SSH keys for the resulting installation yet, neither for the created user nor for root. So I haven't achieved my desired target of a fully automated installation just yet. Debian's Jenkins setup just went with insecure defaults, but given that my sponsored VMs are necessarily connected to the public Internet that seemed like a bad idea to me. I suppose one way out would be to IP/password ACL the preseed file. Another one to somehow get SSH key support into user-setup.

4 October 2015

Philipp Kern: Root on LVM on Debian s390x, new Hercules


Two s390x changes landed in Debian unstable today:
With this it should be possible to install Debian on s390x with root on LVM. I'd be happy to hear feedback about installations with any configuration, be it root on a single DASD or root on LVM. Unless you set both mirror/udeb/suite and mirror/suite to unstable you'll need to wait until the changes are in testing, though. (The debian-installer build does not matter as zipl-installer is not part of the initrd and sysconfig-hardware is part of the installation.)

Furthermore I uploaded a new version of Hercules - a z/Architecture emulator - to get a few more years of maintenance into Debian. See its upstream changelog for details on the changes (old 3.07 new 3.11).

At this point qemu at master is also usable for s390x emulation. It is much faster than Hercules, but it uses newfangled I/O subsystems like virtio. Hence we will need to do some more patching to make debian-installer just work. One patch for netcfg is in to support virtio networking correctly, but then it forces the user to configure a DASD. (Which would be as wrong if Fibre Channel were to be used.) In the end qemu and KVM on s390x look so much like a normal x86 VM that we could drop most of the special-casing of s390x (netcfg-static instead of netcfg; network-console instead of using the VM console; DASD configuration instead of simply using virtio-blk devices; I guess we get to keep zIPL for booting).

19 September 2015

Philipp Kern: Working with z/VM DIRMAINT as a mere user

Two helpful CMS commands to know if the users on the z/VM host you connect to are managed using DIRMAINT:
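(The list of commands appears to have been lost here. Presumably, going by the REVIEW and GET commands discussed below and DIRMAINT's standard command interface, they are:)

```
DIRM REVIEW    <- display your own directory entry for review
DIRM GET       <- fetch your directory entry for editing
```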

REVIEW is a command that always expires from my mind after a few weeks of not using a mainframe. And then I do not usually find it quickly, even knowing where to look. (For some reason GET won't work unless you are a privileged user of the system.)

I guess by now most of the systems use DIRMAINT, but it's an IBM product that requires a separate license in addition to z/VM. Hence on some systems the user directory is still maintained by hand. In this case the password is written verbatim into the file and the administrator needs to change it manually for you.

30 August 2015

Philipp Kern: Automating the 3270 part of a Debian System z install

If you try to install Debian on System z within z/VM you might be annoyed at the various prompts it shows before it lets you access the network console via SSH. We can do better. From within CMS copy the default EXEC and default PARMFILE:

COPYFILE DEBIAN EXEC A DEBAUTO EXEC A
COPYFILE PARMFILE DEBIAN A PARMFILE DEBAUTO A

Now edit DEBAUTO EXEC A and replace the DEBIAN in 'PUNCH PARMFILE DEBIAN * (NOHEADER' with DEBAUTO. This will load the alternate kernel parameters file into the card reader, while still loading the original kernel and initrd files.

Replace PARMFILE DEBAUTO A's content with this (note the 80 character column limit):

ro locale=C
s390-netdevice/choose_networktype=qeth s390-netdevice/qeth/layer2=true
s390-netdevice/qeth/choose=0.0.fffc-0.0.fffd-0.0.fffe
netcfg/get_ipaddress=<IPADDR> netcfg/get_netmask=255.255.255.0
netcfg/get_gateway=<GW> netcfg/get_nameservers=<FIRST-DNS>
netcfg/confirm_static=true netcfg/get_hostname=debian
netcfg/get_domain=
network-console/authorized_keys_url=http://www.kern.lc/id_rsa.pub
preseed/url=http://www.kern.lc/preseed-s390x.cfg

Replace <IPADDR>, <GW>, and <FIRST-DNS> to suit your local network config. You might also need to change the netmask, which I left in for clarity about the format. Adjust the device address of your OSA network card. If it's in layer 3 mode (very likely) you should set layer2=false. Note that mixed case matters, hence you will want to SET CASE MIXED in xedit.

Then there are the two URLs that need to be changed. The authorized_keys_url file contains your SSH public key and is fetched unencrypted and unauthenticated, so be careful what networks you traverse with your request (HTTPS is not supported by debian-installer in Debian).

preseed/url is needed for installation parameters that do not fit into the parameters file - there is an upper character limit that's about two lines longer than my example. This is why this example only contains the bare minimum for the network part; everything else goes into the preseeding file. The file can optionally be protected with an MD5 checksum in preseed/url/checksum.
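For instance, generating that checksum is a one-liner (a sketch; here a stand-in preseed file is created first, use your real one instead):

```shell
# Create a stand-in preseed file for demonstration purposes.
printf 'd-i debian-installer/locale string en_US\n' > preseed-s390x.cfg

# The hash to put into preseed/url/checksum.
md5sum preseed-s390x.cfg | cut -d' ' -f1
```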

Both URLs need to be very short. I thought that there was a way to specify a line continuation, but in my tests I was unable to produce one. Hence each URL needs to fit on one line, including the key. You might want to use an IPv4 address as the hostname.

To skip the initial boilerplate prompts and to skip straight to the user and disk setup you can use this as preseed.cfg:

d-i debian-installer/locale string en_US
d-i debian-installer/country string US
d-i debian-installer/language string en
d-i time/zone string US/Eastern
d-i mirror/country manual
d-i mirror/http/mirror string httpredir.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string

I'm relatively certain that the DASD disk setup part cannot be automated yet. But the other bits of the installation should be preseedable just like on non-mainframe hardware.

23 May 2015

Eddy Petrișor: Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 1

About 2 months ago I set a goal to run some kind of BSD on the spare Linksys NSLU2 I had. This was driven mostly by curiosity, after listening to a few BSDNow episodes and becoming a regular listener, but it was a really interesting experience (it was also somewhat frustrating, mostly due to lacking documentation or proprietary code).

Looking for documentation on how to install any BSD flavour on the Linksys NSLU2, I have found what appears to be some too-incomplete-to-be-useful-for-a-BSD-newbie information about installing FreeBSD, no information about OpenBSD and some very detailed information about NetBSD on the Linksys NSLU2.

I was very impressed by the NetBSD build.sh script which can be used to cross-compile the entire NetBSD system - to do that, it also builds the appropriate toolchain - NetBSD kernel and the base system, even when run on a Linux host. Having some experience with cross compilation for GNU/Linux embedded systems I can honestly say this is immensely impressive, well done NetBSD!

A few failed attempts to properly follow the instructions and many hours of (re)building later, I had the kernel and the sets (the NetBSD system is split into several parts grouped by functionality; these are the sets), so I was in the position of having to set things up to be able to net boot - kernel loading via TFTP and rootfs on NFS.

But it wouldn't be challenging if the instructions were followed to the letter, so the first thing I wanted to change was that I didn't want to run dhcpd just to pass the DHCP boot configuration to the NSLU2, that seemed like a waste of resources since I already had dnsmasq running.

After some effort and struggling with missing documentation, I managed to use dnsmasq to pass DHCP boot parameters to the slug, but also use it as TFTP server - after some time I documented this for future reference on my blog and expect to refer to it in the future.

Setting up NFS wasn't a problem, but, when trying to boot, I found that I managed to misread at least 3 or 4 times some of the NSLU2 related information on the NetBSD wiki. To be able to debug what was happening I concluded the slug should have a serial console attached to it, which helped a lot.

Still the result was that I wasn't able to boot the trunk version of the NetBSD code on my NSLU2.

Long story short, with the help of some people from the #netbsd IRC channel on Freenode and from the port-arm NetBSD mailing list I found out that I might have a better chance with specific older versions. In practice what really worked was the code from the netbsd_6_1 branch.

Discussions on the port-arm mailing list, some digging into the (recently found) PR (problem reports), and a successful execution of the trunk kernel (at the time, version 7.99.4) together with 6.1.5 userspace led me to the conclusion that the NetBSD userspace for armbe was broken in the trunk branch.

And since I concluded this would be a good occasion to learn a few details about NetBSD, I set out to git bisect through the trunk history to identify when this happened. But that meant being able to easily load kernels and run them from TFTP, which was not how the RedBoot bootloader flashed into the slug behaves by default.

By default, the RedBoot bootloader flashed into the NSLU2 waits 2 seconds for a manual interaction (it waits for a ^C) on the serial console or on the telnet RedBoot prompt; then, if no such event happens, it copies the Linux image it has in flash starting at address 0x50060000 into RAM at address 0x01d00000 (after stripping the Sercomm header) and then executes the copied code from RAM.

Of course, this is not a very handy way to try to boot things from TFTP, so my first idea to overcome this limitation was to use a second stage bootloader which would do the loading via TFTP of the NetBSD kernel, then execute it from RAM. Flashing this second stage bootloader instead of the Linux kernel at 0x50060000 would make sure that no manual intervention except power on would be necessary when a new kernel+userspace pair is ready to be tested.

Another advantage was that I would not risk bricking the NSLU2 since I would not be changing RedBoot, the original bootloader.

I knew Apex was used as the second stage bootloader in Debian, so I started configuring my own version of the Apex bootloader to make it work for the netbsd-nfs.bin image to be loaded via TFTP.

My first disappointment was that Apex did not support receiving the boot parameters via DHCP, but only via RARP (it was clearly less tested with BOOTP or DHCP), and TFTP was documented in the code as being problematic. That meant that I would have to hard-code the boot configuration or configure RARP, but that wasn't too bad.

Later I found out that I had wasted time on that avenue, because the network driver in Apex was some Intel code (the NPE Access Library) which can't be freely distributed, but could have been downloaded from Intel's site back in 2008-2009. The bad news was that current versions did not work at all with the old patchwork that had been done in Apex to let the driver made for Linux compile in a world of its own so it could be incorporated into Apex.

I was stuck, and the only options I had were:
  1. Fight with the available Intel code and make it compile in Apex
  2. Incorporate the NPE driver from NetBSD into a rump kernel which will be included in Apex, since I knew the NetBSD driver only needed a very easily obtainable binary blob, instead of the entire driver as was in Apex before
  3. Hack together an Apex version that simulates the typing of the necessary commands to load the netbsd-nfs.bin image inside RedBoot, or in other words, call from Apex the RedBoot functions necessary to load from TFTP and execute NetBSD.
Option 1 did not look that appealing after looking into the horrible Intel build system and its endless dependencies into a specific Linux kernel version.

Option 2 was more appealing, but since I didn't know NetBSD and had only tried once to build and run a NetBSD rump kernel, it seemed like a doable project only for an experienced NetBSD developer, or at least an experienced NetBSD user, which I was not.

So I was left with option 3, which meant I had to do some reverse engineering of the code, because, although RedBoot is GPL, Linksys did not publish the source from which the running RedBoot was built.


(continues here)

18 February 2015

Philipp Kern: Caveats of the HP MicroServer Gen8

If you intend to buy the HP MicroServer Gen8 as a home server there are a few caveats that I didn't find on the interwebs before I bought the device:
  • Even though the main chassis fan is now fixed in AHCI mode with recent BIOS versions, there is still an annoying PSU fan that's tiny and high frequency. You cannot control it and the PSU seems to be custom-built.
  • The BIOS does not support ACPI S3 (suspend-to-RAM) at all. Apparently, it being a server BIOS, they chose not to include the code paths needed to properly turn off devices and turn them back on. This means that it's not possible to simply suspend it and have it woken up when your media center boots.
  • In contrast to the older AMD-based MicroServers the Gen8 comes with iLO, which will consume quite a few watts just for being present even if you don't use it. I read figures of about ten watts. It also cannot be turned off, as it does system management like fan control.
  • The HDD cages are not vibration proof or decoupled.
If you try to boot FreeBSD with its zfsloader you will likely need to apply a workaround patch, because the BIOS seems to do something odd. Linux works as expected.

13 October 2014

Philipp Kern: pbuilder and pam_tmpdir

It turns out that my recent woes with pbuilder were all due to libpam-tmpdir being installed (at least two old bug reports exist about this issue: #576425 and #725434). I rather like my private temporary directory that cannot be accessed by other (potential) users on the same system. Previously I used a hook to fix this up by ensuring that the directory actually exists in the chroot, but somehow that recently broke.

A rather crude but working solution seems to be "session required pam_env.so user_readenv=1" in /etc/pam.d/sudo and "TMPDIR=/tmp" in /root/.pam_environment. One could probably skip pam_tmpdir.so for root, but I did not want to start fighting with pam-auth-update as this is in /etc/pam.d/common-session*.
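Concretely, the two changes described above look like this (a sketch of the exact lines; file paths as named in the post):

```
# /etc/pam.d/sudo - append:
session required pam_env.so user_readenv=1

# /root/.pam_environment:
TMPDIR=/tmp
```

With user_readenv=1, pam_env reads root's ~/.pam_environment and forces TMPDIR back to /tmp for the sudo session, overriding the per-user directory that pam_tmpdir set up.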

29 November 2013

Philipp Kern: On PDiffs Usefulness for Stable

Axel just said that PDiffs are really useful for stable because it changes seldom. The truth is that there are no PDiffs for stable. I consider this a bug, because you need to download a huge blob on every point release even though only a few stanzas changed.

The real problem of PDiffs is the count that needs to be applied. We run "dinstall" much more often now than we used to and we cannot currently skip diffs in the middle when looking at a dak-maintained repository. It would be more useful if you could fetch a single diff from some revision to the current state and apply this instead of those (download, apply, hash check)+ cycles. 56 available index diffs are also not particularly helpful in this regard. They cover 14 days with four dinstalls each, but it takes a very long time to apply them, even on modern machines.

I know that there was ongoing work on improving PDiffs that wanted to use git to generate the intermediate diffs. The current algorithm used in dak does not require old versions except one to be kept, but keeping everything is at the far other end of the spectrum, where you have a git repository that potentially gets huge unless you rewrite it. It should be possible to just implement a rolling ring buffer of Packages files to diff against; reprepro already generates the PDiffs in the right way, and the Index format is flexible enough to support this. So someone would just need to make that bit work in dak.
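The bookkeeping for such a ring buffer is trivial; here is a toy sketch (not dak code, just an illustration of the idea: keep the last N Packages snapshots and diff each of them directly against the current state):

```python
from collections import deque

# Keep only the last RING_SIZE snapshots; older ones fall off
# automatically, bounding the number of diffs a client ever applies.
RING_SIZE = 4
ring = deque(maxlen=RING_SIZE)

def publish(new_packages_snapshot: str) -> None:
    """Record a new Packages state after a dinstall run."""
    ring.append(new_packages_snapshot)

# Simulate six dinstall runs.
for i in range(6):
    publish(f"snapshot-{i}")

# Only the most recent four states remain to generate diffs against.
assert list(ring) == ["snapshot-2", "snapshot-3", "snapshot-4", "snapshot-5"]
```

Each remaining snapshot would get one direct diff to the current Packages file, so a client is always at most one (download, apply, hash check) cycle away from current, instead of chaining 56 intermediate diffs.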

16 September 2013

Philipp Kern: buildd.debian.org

Did you notice that it was gone? It's back now. Phew. Apparently an upgrade to wheezy was scheduled for the past weekend. This went badly with the RAID controller acting up after the upgrade. So the move to a new machine (or rather VM) was sped up. And because various policies changed, we needed to move to a dedicated database server with a new authentication scheme, get rid of the /org symlink, get rid of the buildd_* accounts, cope with a new PHP, cope with rlimits that broke a Python CGI (thanks, KiBi!), triggers that weren't tickled, and various other stuff. Thanks to DSA for handling the underlying infrastructure despite some communication issues during heavy fire fighting.

14 August 2013

DebConf team: DebConf13 video recordings and hires streams (Posted by Holger Levsen and the DebConf videoteam)

This is just a quick update about the video streams of DebConf13, which are available at the following URLs: /me bows to these pretty fantastic results - and the view is also fantastic! Especially from down at the lake :-)
P.S.: If you missed some technical details in the systemd talk from Lennart, these are probably in here.

3 March 2013

Philipp Kern: git-annex: encrypted remotes

Due to the data loss I blogged about, I had to reverse engineer the encryption used by git-annex for its encrypted special remotes. The file system on which the content lived has a bullet hole of 8 GB in it, which was helpfully discarded by pvmove. It's pretty unhappy about that fact, parts of the git repository are unusable and directories cannot be accessed anymore. git-annex cannot possibly run anymore.

However, I was still able to access the git-annex branch within said git repository (using porcelain). This branch contains a file called remote.log which contains the keys of the special remotes. There's one per remote, encrypted to a GPG key of your choice and all files within that remote are encrypted with the same symmetric key.

One small detail stopped me from getting the decryption right the first time, though. It seems that git-annex uses randomness generated by GPG and armored into base64. In my naïveté I spotted the base64 and decoded it. Instead it's used verbatim: the first 256 bytes as the HMAC key (which reduces randomness to 192 bytes) and the remaining bytes as the symmetric key used by GPG (which will do another key derivation for CAST5 with it). A bug about that just hit the git-annex wiki.
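The split itself is mechanical once you have the decrypted cipher blob. A sketch (the real blob comes out of gpg --decrypt on the cipher field from remote.log; here a random stand-in of plausible size is fabricated instead):

```shell
# Fabricate a 300-byte stand-in for the GPG-decrypted cipher blob.
head -c 300 /dev/urandom > cipher.bin

# Per the post: the blob is used verbatim, NOT base64-decoded.
head -c 256 cipher.bin > hmac.key   # first 256 bytes: HMAC key
tail -c +257 cipher.bin > sym.key   # rest: GPG symmetric key material

wc -c < hmac.key
wc -c < sym.key
```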

With that knowledge in mind I wrote a little tool that's able to generate encrypted content keys from the plain ones used in the symlinks. That helps you to locate the file in the encrypted remote. Fetch it and then use the tool to decrypt the file in question with the right key.

The lesson: Really backup the git repository used with git-annex and especially remote.log. I'm now missing most of the metadata but for some more important files it's luckily still present. Recovery of the file content does not depend on it if you can deduce the filename from the content. If you have many little files it might be a bit futile without it, though.

2 March 2013

Philipp Kern: PSA: LVM, pvmove and SSDs

If you use LVM with Wheezy on a solid-state drive, you really want to install the latest lvm2 update (i.e. 2.02.95-6, which contains the changes of -5). Otherwise, if you set issue_discards=1 in /etc/lvm/lvm.conf, you will experience severe data loss when using pvmove. Happened to me twice, once I didn't care (chroot data being lost), the second time (today) I did. Not fun, especially when the backup of the data was scheduled for the same day.
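For reference, the dangerous knob lives in /etc/lvm/lvm.conf; with an unpatched lvm2 the safe setting looks like this (a sketch; 0 is the upstream default, the data loss requires having turned it on):

```
# /etc/lvm/lvm.conf - keep discards off while the buggy lvm2 is installed
devices {
    issue_discards = 0
}
```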

One has to wonder why it takes three months for a bug that trashes data to reach testing. (Obviously I know the answer, but they're not particularly good reasons.) Other distributions, like Ubuntu, were much quicker to notice and incorporate that fix. And in the case of the named distribution not because they auto-synced it from unstable. If somebody notices such a grave bug, please yell at people to get the fix out there to our users. Thanks.

8 February 2013

Philipp Kern: Mozilla's direction

Am I the only one who's disappointed with the route Mozilla's taking and left wondering what the direction is? First they killed off the development of Thunderbird because, as we all know, people mainly use webmail these days. Then they presented us their view that the big Certificate Authorities are too big to fail, as CAs gravely violated our trust (cf. Trustwave and their MitM authority). And "now" they're also blocking the introduction of new formats into their browser because they cannot be the one who innovates. Instead Microsoft and Apple obviously need to take the lead in introducing a format into their browsers because otherwise it wouldn't be useful. Even though it's safe to say that Chrome and Firefox make up more than half of the desktop browser market share. It might be that Chrome is nibbling at Firefox's; still, IE seems to be in decline and Safari is more of a footnote than something many people would care strongly about.

There were of course some valid reasons for not supporting WebP yet. But most of them got fixed in the meantime and all we hear is the referral to proprietary vendors who need to move first. If I wanted to depend on such vendors I'd go with proprietary operating systems. (Having to deal with hardware products of proprietary vendors at $dayjob is enough.) So what's up, Mozilla? The solution is to ignore your users and tag bugs with patches wontfix?

The only real advantage of Firefox over Chromium these days is the vast amount of plugins and extensions (e.g. Pentadactyl, for which there is no real equivalent available). Another sad fact is that you need to pull Firefox from a 3rd party repository (even though packages are coming from the 2nd party) to get a current version onto your Debian system to work with the web. But then it's not Mozilla who's to blame here. Maybe we should've introduced one Iceweasel version that's allowed to have reverse-dependencies and one that cannot.

(This post might contain hyperbole, which should be considered as such.)

22 November 2012

Philipp Kern: OpenClonk

The Free and Open successor to the Clonk computer game series OpenClonk is now available in Debian sid and Ubuntu raring: just use sudo apt-get install openclonk to install it (backports to Ubuntu precise and quantal can be found in the OpenClonk Releases PPA).

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="236" src="https://www.youtube.com/embed/ykJZ9WsevGE" width="420">OpenClonk Beyond the Rocks Trailer</iframe>

It's still under development, but as you can see it's also already quite functional. The engine was made available freely under the ISC license after the original author of Clonk was no longer interested in creating new successors. Many of the old volunteer contributors then picked it up. Be aware that same computer multiplayer, which was available in the older parts of the series, is gone. If you play on the network, you need to ensure that both of you have the same version of the game.

2 October 2012

Philipp Kern: How to maintain an OpenWRT installation

I wondered how to properly maintain an OpenWRT installation that needs to track trunk for one reason or another. For instance the OpenWRT release pace is quite slow and "new" devices or various fixes are only available in this particular branch.

Now the OpenWRT project does offer snapshot builds. These are images built with the default config for the particular device type. You can even upgrade most models using sysupgrade, which preserves a list of configuration files you specify when flashing the new image, instead of doing a dreadful tftp upload. But you still lose all the packages you installed and accumulate a bit of cruft when reinstalling them. If you do not flash new images, you will be unable to install kernel modules after a while, because the kernel version they're based upon was updated.

So what's suggested is this:
  • You checkout the trunk from svn. You enable all the feeds you need (e.g. packages and luci) and install the packages you want. (See scripts/feeds.)
  • Run make menuconfig and select the target system type and the device profile.
  • make defconfig will now give you the default configuration for the selected profile. It is important to use this as the starting point.
  • Again invoke menuconfig and select all the packages you need. You can get the current list of packages installed on your device with opkg list_installed. You can search in menuconfig as you would in a kernel build config screen.
  • If you're happy with the configuration, save the difference to the default configuration to a file: scripts/diffconfig.sh > my-own-config
  • Now start the build with make. With the package configuration I picked I need about 5 GB of disk space. It will start off compiling a toolchain, the kernel and then the packages you selected. If something seems to go wrong, you probably want to restart the build with make V=s to see the build messages (the default is pretty quiet). Some -jX option for parallel building might shorten your waiting time.
  • Finally you will get an image in bin/<system type>. You want to grab the squashfs & sysupgrade image.
  • Check /etc/sysupgrade.conf on the live system if files are missing. /etc/config, passwd, shadow and some other files are preserved even if they are not listed. Those are then the only files that are copied into a temporary ramdisk during upgrade and written back to the filesystem post-flash. You need to keep this file up-to-date if you include new packages that have non-UCI configuration.
  • For actually executing the upgrade process, you can either use the luci web interface or start sysupgrade over SSH. For the latter you will have to copy the image yourself. If you go via luci, you should check the md5 hash presented to you against the one of the image on your disk.
  • If you want to update your image, just svn update (in the hope that the OpenWRT developers did not break anything in trunk), get the new defconfig, append your saved configuration delta (the file "my-own-config" above) to .config and run make oldconfig. This will keep track of feature changes in the default configuration (like shadow password support).
The advantage of building an image that includes all the packages is that you don't have to reinstall them and, due to the use of squashfs, all files provided by those packages are actually kept in compressed form in flash. The image I built has a size of about 7 MB and contains "heavyweight" packages like isc-dhcpd, bind, openvpn (using openssl), bash, vim-tiny, tcpdump-mini, luci, and a webserver with TLS support.
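The steps above condense into a short command transcript (a sketch; the checkout URL and -j value are examples, and menuconfig steps are interactive):

```
svn co svn://svn.openwrt.org/openwrt/trunk openwrt && cd openwrt
./scripts/feeds update -a            # enable the feeds you need
./scripts/feeds install -a -p luci   # e.g. install the luci packages
make menuconfig                      # pick target system + device profile
make defconfig                       # reset to the profile's defaults
make menuconfig                      # now select your packages
./scripts/diffconfig.sh > my-own-config   # save your delta
make -j2                             # build; add V=s to debug failures

# later, to update:
svn update && make defconfig
cat my-own-config >> .config && make oldconfig && make
```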

As always: take great care when you're flashing devices. The devices I bought cannot be bricked, but this might not be true for all devices that run OpenWRT. You might need to do a tftp image upload if something goes wrong. Check the OpenWRT wiki for instructions and possibly prepare your computer for them before starting to play with your box. For my Buffalo WZR-HP-AG300H the sysupgrade procedure is stable.

23 September 2012

Philipp Kern: Call for testing: Upcoming Squeeze point release 6.0.6

We just sent out a new call for testing for the next point release of Debian Squeeze. The last one was back in May, hence there are a bunch of updates. Please test the packages in squeeze-proposed-updates on some machines running squeeze if possible, so that we don't screw up your production machines with bad updates in a week. The point release is scheduled for September 29th, i.e. next Saturday. Don't forget to copy the debian-release mailing list when you encounter regressions. Thanks for your efforts.

If you want to receive these notices by mail, please subscribe to the debian-stable-announce mailing list.

21 September 2012

Philipp Kern: IPv6 support in debian-installer, take 2

The IPv6 patch set for netcfg (part of debian-installer) has landed in Debian unstable. In follow-up uploads I diverged from the Ubuntu patch set a little bit:
  • I dropped the use of DUID-LL as the client identifier used by the DHCPv6 client. While basically a good idea, because it's predictable, it's also against the RFC to do that. We simply don't know if the network interface is permanently attached to the device. If the NIC that's used for DUID-LL generation is reused for installation in another box, the same identifier would be generated and stored on the machine. That's what DUID-LLT (the default in other software) avoids. As much as I loathe the replacement of the current MAC address-based scheme of DHCPv4 with DUID-LLT, it seems to be more correct to use that instead of DUID-LL, because that identifier is designed to be decoupled from the presence of the NIC. If you want predictable addressing, just enable SLAAC in your broadcast domain and use the EUI-64 address, which already contains the current MAC address.
  • If SLAAC is used, netcfg will activate IPv6 privacy extensions in the installed system by default. They can be turned off by editing /etc/network/interfaces post-installation. This will only affect outgoing traffic, incoming traffic can be addressed at the stable EUI-64 address. (We don't use a randomized interface identifier like Windows does.)
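For reference, turning privacy extensions back off in the installed system should amount to dropping the privext option in /etc/network/interfaces (a sketch assuming ifupdown 0.7's inet6 syntax and eth0 as the interface name):

```
# /etc/network/interfaces (fragment)
iface eth0 inet6 auto
    privext 0    # 0 = disable privacy extensions, 2 = prefer temporary addresses
```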
Thanks to Michael Tokarev's recent upload of busybox, we now also have a more featureful ping and ping6 in debian-installer's environment.

Known bugs:
  • As far as I can see, the DUID generated in the debian-installer environment is not copied into the installed system. Given the low prevalence of stateful DHCPv6 for servers I don't consider this a blocker, though.
  • If stateful DHCPv6 is requested by the router advertisement (other config flag being set), the DHCPv6 client will not time out and will retry indefinitely to get an address.
  • According to Debian bug #688273 preseeding of netcfg/use_autoconfig does not work correctly. netcfg/disable_dhcp needs to be re-added as a deprecated preseeding option and mapped onto the use_autoconfig value until somebody comes up with a better scheme.
  • We should state in the installation guide that ftp.ipv6.debian.org and http.debian.net can both be used for IPv6-only installation. The installer's mirror list does not indicate whether a mirror is IPv6-capable.
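For those who preseed, the option in question would look like this (hedged: the exact mapping between the old and new names is precisely what the bug is about):

```
# preseed fragment: disable network autoconfiguration in d-i
d-i netcfg/use_autoconfig boolean false

# the deprecated spelling that bug #688273 asks to keep working:
#d-i netcfg/disable_dhcp boolean true
```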
If you could take another look at a current d-i daily, install a system with it and check whether the installed system has a correct network configuration, that would still be appreciated. I also need somebody to test the GNU/kFreeBSD image. I fixed it up to build there, but the result might be entirely wrong.

My sincere thanks go to Bernhard Schmidt and Karsten Merker, who successfully completed installations over IPv6 in various environments!

3 September 2012

Philipp Kern: IPv6 support in debian-installer

I tried to continue netcfg's journey to support IPv6 in debian-installer. Matt Palmer wrote a large patch set many moons ago (kudos!) and Colin Watson polished it and included it into Ubuntu. The reason for it not being merged into Debian first was ifupdown. Version 0.7 finally supports IPv6 and only entered Debian unstable by the end of May. Due to a bunch of changes to netcfg, one reason being a Summer of Code project on it, I had to forward-port those 50 patches and add two bug fixes on top. Hence it's possible that I introduced some breakage, even if the result works well for me (apart from one DHCPv6 oddity that cannot be seen from within KVM). The tree can be found in the people/pkern/ipv6 branch. I was unable to check if Ubuntu has introduced any additional patches on top of the old patch set, as I'm not familiar enough with bzr.

This mini.iso (detached signature) netinst image contains a debian-installer current as of today and two additional patched udebs: netcfg with the aforementioned IPv6 patch set and busybox with ping6 enabled. I'd appreciate it if you could go and test it in one (or more) of the following environments: IPv4-only (DHCPv4 / static), wireless (IPv4 / IPv6 as desired, there has been some refactoring), stateless IPv6 autoconfiguration (it even supports RDNSS and DNSSL), stateless IPv6 autoconfiguration with stateless DHCPv6, and stateful DHCPv6. It will try to configure dual stack if possible. Please note that many Debian mirrors are not yet IPv6-enabled. Even prominent ftp.CC.debian.org hosts like ftp.de.debian.org still do not support it. So you'll have to pick one carefully if you want to try IPv6-only installation. (Which worked for me!)

If you have any feedback for me, please leave it here as a comment or mail me at pkern@d.o. Thanks!

14 August 2012

Philipp Kern: Debian s390x installation now working

With a current daily build of debian-installer you should now be able to install a VM with Debian s390x. I'm told that even installation within Hercules works again. d-i's beta 1 for wheezy has a broken busybox and is hence unusable. But there were more issues:
  • zipl-installer was blacklisted from building on s390x by Packages-arch-specific which caused nobootloader to be selected during installation.
  • base-installer was unable to pick the correct kernel image due to it not having any rules for this architecture.
  • A netcfg fix was needed that removes a 50s timeout while d-i tries to arping the configured gateway. This check seems to be pretty new and if it fails the link is still considered up and the installation continues. The network driver commonly used on s390(x) (qeth) uses layer 3 configuration by default, which means that ARP is meaningless. The IPv4 and IPv6 addresses are configured in the network adapter which then does all the ARP if necessary (i.e. not within the same machine).
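Whether a given qeth device runs in layer 2 or layer 3 mode can be inspected via sysfs (a sketch; 0.0.0600 is just an example ccwgroup device address, and switching modes requires taking the device offline first):

```
# 0 = layer 3 (the default; no ARP on the wire), 1 = layer 2
cat /sys/bus/ccwgroup/devices/0.0.0600/layer2

# to switch to layer 2 operation:
echo 0 > /sys/bus/ccwgroup/devices/0.0.0600/online
echo 1 > /sys/bus/ccwgroup/devices/0.0.0600/layer2
echo 1 > /sys/bus/ccwgroup/devices/0.0.0600/online
```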
The resulting installation even boots. But it shares some problems with squeeze's s390 port:
  • udev's persistent network device naming is broken. It includes a volatile device ID that is incremented with each reboot. This is now tracked in Debian bug #684766.
  • The TERM variable is set to linux during boot-up instead of dumb (the s390(x) terminals are either line- or form-based, except for one HMC ASCII terminal). Together with the new fancy coloured LSB init output in wheezy the screen gets even more noisy with all those escape sequences. It's yet unclear where this should be fixed. The kernel provides the init system with a TERM=linux default and it's not yet fixed up afterwards (either in initramfs-tools, or busybox, or sysvinit).
Tip of the day: If you see funny characters in x3270, you may have selected the wrong EBCDIC code page. If you use Linux, you want to select CP1047 (or its Euro variant) instead of the default CP037 in Options → Character Set. That's the target of the in-kernel ASCII-to-EBCDIC converter.
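If you want that setting to stick, it can presumably also be chosen up front (hedged: option and resource names vary between x3270 versions, and the host name is a placeholder):

```
# on the command line:
x3270 -charset cp1047 zvm.example.com

! or persistently in ~/.x3270pro (X resource syntax):
x3270.charset: cp1047
```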
