Search Results: "Michael Casadevall"

1 October 2013

Michael Casadevall: Spanish or Bust: Attaining fluency in 90 days or else ...

"How do you have an adventure?"

"You take a stupid idea, and follow through ..."

Perhaps I've truly lost my mind for once, but I've decided to embark on a personal project to try and achieve some fluency in Spanish in just 90 days, and I'm posting it to the internet to keep me honest and on track towards doing so. I will post future updates as I go, and would welcome help from anyone who is interested in helping!

The cost of failure: being forced to praise Windows 8 in a future video, and to run it for one week on my laptop.



(I apologize for the quality, but I finally managed to find the nerve to record myself doing this, and uploaded it before I lost that nerve again.)


22 June 2012

Michael Casadevall: Announcement of Calxeda Highbank Images for Quantal

Hello all,

As many of you are aware, Canonical, in coordination with Calxeda and others, has been working to bring Ubuntu to a new class of high-performance, low-power-consumption cluster computers built around ARM processors. Many of you who were in attendance at UDS in Oakland may remember seeing Calxeda's talks and demonstrations live, and the exciting news that this represents. The full presentation is available here.

In line with this work, I am extremely pleased to announce that the initial images for the Calxeda Highbank platform are now available for download, with installation instructions available here. Please remember that Quantal is still in alpha development, and is not currently recommended for use in a production environment. As development of 12.10 continues, we will continue to refine these images and our tools to fully embrace MAAS on ARM, and to make 12.10 our best release yet.


As an additional note, Highbank support for Ubuntu 12.04 LTS will be released as part of the 12.04.1 update in mid-August and will join our support for ArmadaXP from Marvell, which was released as part of 12.04.

---
Michael Casadevall
ARM Server Tech Lead
Professional Engineering and Services, Canonical
michael.casadevall@canonical.com

28 December 2011

Russell Coker: Secure Boot and Protecting Against Root

There has been a lot of discussion recently about the recent Microsoft ideas regarding secure boot; in case you have missed it, Michael Casadevall has written a good summary of the issue [1]. Recently I've seen a couple of people advocate the concept of secure boot with the stated idea that root should be unable to damage the system. As Microsoft software is something that doesn't matter to me, I'll restrict my comments to how this might work on Linux.

Restricting the root account is something that is technically possible. For much of the past 9 years I have been running SE Linux Play Machines which have UID 0 (root) restricted by SE Linux such that they can't damage the system [2]; there are other ways of achieving similar goals. But having an account with UID 0 that can't change anything on the system doesn't really match what most people think of as "root". I just do it as a way of demonstrating that SE Linux controls all access, such that cracking a daemon which runs as root won't result in immediately controlling the entire system. As an aside, my Play Machine is not online at the moment; I hope to have it running again soon.

Root Can't Damage the System

One specific claim was that root should be unable to damage the system. While a secure boot system can theoretically result in a boot to single user mode without any compromise, that doesn't apply to fully operational systems. For a file owned by root to be replaced, the system security has to be compromised in some way. The same compromise will usually work every time until the bug is fixed and the software is upgraded. So the process of cracking root that might be used to install hostile files can also be used at runtime to exploit running processes via ptrace and do other bad stuff. Even if the attacker is forced to compromise the system at every boot, this isn't a great win for the case of servers with months of uptime or for the case of workstations that have confidential data that can be rapidly copied over the Internet. There are also many workstations that are live on the Internet for months nowadays.

Also the general claim doesn't really make sense on its own. "root" usually means the account that is used for configuring the system. If a system can be configured then the account which is used to configure it will be able to do unwanted things. It is theoretically possible to run workstations without external root access (EG have them automatically update to the latest security fixes). Such a workstation configuration MIGHT be able to survive a compromise by having a reboot trigger an automatic update. But a workstation that is used in such a manner could just be re-imaged, as it would probably be used in an environment where data-less operation makes sense.

An Android phone could be considered as an example of a Linux system for which the root user can't damage the system, if you consider "root" to mean "person accessing the GUI configuration system". But then it wouldn't be difficult to create a configuration program for a regular Linux system that allows the user to change some parts of the system configuration while making others unavailable. Besides, there are many ways in which the Android configuration GUI permits the user to make the system mostly unusable (EG by disabling data access) or extremely expensive to operate (EG by forcing data roaming). So I don't think that Android is a good example of root being prevented from doing damage.

Signing All Files

Another idea that I saw advocated was to have the secure boot concept extended to all files. So you have a boot loader that loads a signed kernel, which then loads only signed executables, and then every interpreter (Perl, Python, etc) will also check for signatures on files that they run. This would be tricky with interpreters that are designed to run from standard input (most notably /bin/sh but also many other interpreters). Doing this would require changing many programs; I guess you would even have to change mount to check the signature on /etc/fstab, etc. This would be an unreasonably large amount of work.

Another possibility would be to change the kernel such that it checks file signatures and has restrictions on system calls such as open() and the exec() family of calls. In concept it would be possible to extend SE Linux or any other access control system to include access checks on which files need to be signed (some types such as etc_t and bin_t would need to be signed but others such as var_t wouldn't). Of course this would mean that no sysadmin work could be performed locally, as all file changes would have to come from the signing system. I can imagine all sorts of theoretically interesting but practically useless ways of implementing this, such as having the signing system disconnected from the Internet with USB flash devices used for one-way file transfer, because you can't have the signing system available to the same attacks as the host system.

The requirement to sign all files would reduce the use of such a system to a tiny fraction of the user-base, which would then raise the question of why anyone would spend the effort on that task when there are so many other ways of improving security that involve less work and can be used by more people.

Encrypted Root Filesystem

One real benefit of a secure boot system is for systems using encrypted filesystems. It would be good to know that a hostile party hasn't replaced the kernel and initrd when you are asked for the password to unlock the root filesystem. This would be good for the case where a laptop is left in a hotel room or other place where a hostile party could access it. Another way of addressing the same problem is to boot from a USB device, so that you can keep a small USB boot device with you when it's inconvenient to carry a large laptop (which works for me). Of course it's theoretically possible for the system BIOS to be replaced with something that trojans the boot process (EG runs the kernel in a virtual machine). But I expect that if someone who is capable of doing that gets access to my laptop then I'm going to lose anyway.

Conclusion

The secure boot concept does seem to have some useful potential when the aim is to reboot the system and have it automatically apply security fixes in the early stages of the boot process. This could be used for netbooks and phones. Of course such a process would have to reset some configuration settings to safe defaults; this means replacing files in /etc and some configuration files in the user's home directory. So such a reboot and upgrade procedure would either leave the possibility that files in /etc were still compromised, or it would remove some configuration work and thus give the user an incentive to avoid applying the patch.

Any system that tries to extend signature checks all the way would either be vulnerable to valid but hostile changes to system configuration (such as authenticating to a server run by a hostile party) or have extreme ease of use issues due to signing everything. Also a secure boot will only protect a vulnerable system between the time it is rebooted and the time it returns to full operation after the reboot. If the security flaw hasn't been fixed (which could be due to a 0-day exploit or an exploit for which the patch hasn't been applied) then the system could be cracked again. I don't think that a secure boot process offers real benefits to many users.

20 November 2011

Michael Casadevall: Possible GLX Bug in Ubuntu; feedback needed (affects Intel video cards 2D/3D acceleration)

So I've recently been screwing around with VirtualBox on a personal project, and I ran into an issue where I was unable to enable 2D/3D acceleration on Ubuntu 11.10. After quite a bit of debugging and forum searching, it turned out the problem was that the NVIDIA GLX driver was being loaded instead of the standard Mesa one, preventing any video acceleration from working properly on my Intel-based video card.

I just recently reinstalled Kubuntu on this laptop, and since it's a fairly stock install at the moment, I suspect that this is a general (K)ubuntu bug, and not something related to me screwing around with my system. In addition, since I switched to using Kubuntu full time, this is the first time I've seen transparency and other desktop effects, and system performance has improved dramatically. While I can't say for certain, I suspect that my system was also affected on its previous install. Part of the issue may be related to which packages are seeded per flavour, so this bug may only affect those who installed Kubuntu over, say, Xubuntu or Ubuntu; without more information, it's impossible to say.

This is where you can help: if you are running any flavor of Ubuntu with an Intel-based video card, you might be affected by this too.

Here's how to check; open a terminal, and type:

mcasadevall@daybreak:/var/log$ cat /var/log/Xorg.0.log

then find the section where the glx module is loaded. It looks something like this:

[236901.570] (II) LoadModule: "glx"
[236901.571] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[236901.578] (II) Module glx: vendor="X.Org Foundation"
[236901.578] compiled for 1.10.4, module version = 1.0.0
[236901.578] ABI class: X.Org Server Extension, version 5.0

(this is on a machine where the Intel acceleration is properly working).
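If you'd rather not scroll through the whole log, something along these lines should pull out just that section (a quick sketch, assuming the stock log path used above):

grep -A 4 'LoadModule: "glx"' /var/log/Xorg.0.log

The -A 4 just prints the four lines following the match, which covers the vendor and version lines shown above.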

If it says 'ATi' or 'NVIDIA', you've run into the same issue I have. So dear readers, I ask that if you've had any issue with graphics performance, gaming, or simple UI lag and have an Intel video card, please post a comment with your video card, what flavor of Ubuntu you have installed, and the glx section of Xorg.0.log. If I get a few reports that confirm this, I'll file a proper bug in Launchpad and then work to get this fixed.

16 November 2011

Michael Casadevall: Touch-friendly apps in Ubuntu/Debian?

I'm working on a personal pet project and wanted to get some feedback on the best apps to use in a touch-only environment. I know that there are a few people who use Debian or Ubuntu on a tablet, and I was hoping to get suggestions on the best desktop environment and apps available. Please leave some comments with suggestions, and with a little luck, I'll have something to demo on this blog in a few weeks.

15 November 2011

Michael Casadevall: Secure Boot - it's here, and has been for quite a while ...

There's been a lot of noise about Microsoft requiring Secure Boot for Windows 8 OEMs. For those of you unfamiliar with it, Secure Boot requires that the boot chain be signed, and this 'feature' must be enabled by default. Although I have been unable to find specific details, it appears that the chain of trust needs to extend from the BIOS/UEFI all the way down to the kernel. Obviously, requiring a signed boot chain makes using FOSS platforms like Ubuntu or Debian an impossibility short of having the UEFI Platform Key and re-signing the entire chain.

Steven Sinofsky's MSDN blog has a fairly good overview of how it works. Canonical and Red Hat also have a good white paper on why secure boot is a serious problem for Linux distributions. Even if secure boot itself can be disabled, it *greatly* raises the bar for general end-users to successfully install Ubuntu on their machines. In addition, it is left to the OEMs and BIOS manufacturers to provide the option to disable it.

There has already been a long history of OEMs removing BIOS options or introducing DRM; for instance, locking out VT-x on Sony laptops, or restricting laptops to only accept 'branded' wifi and 3G cards. Given this track record, can OEMs realistically be trusted to keep this option available?

What most people don't realize is that secure boot itself is not a new concept; it's simply part of the Trusted Computing initiative, and has been implemented on embedded platforms for many years. If you own an iPhone, or one of the vast majority of Android devices, you are using a device that either has the secure boot feature or something very close to it. This is especially painful on Android, as Google's security system restricts users to a *very* limited shell and subset of utilities on non-rooted devices. WebOS, Maemo, and to my knowledge MeeGo give the end-user full unrestricted access to the boot chain, and you can swap out the kernel or even the entire OS if you are so motivated.

Although it is still an ongoing problem, several vendors such as HTC, Samsung, Motorola (kinda), and even Sony now offer ways to unlock their bootloaders. In the Android community, having unlockable bootloaders has been a welcome middle ground in the traditionally restrictive and locked-down world of cellular devices.

While some may argue that such locks are necessary to protect consumers, it is perfectly feasible to create devices with unlocked bootloaders that are still secure. The Nook Color is an Android-powered eReader. Its stock firmware doesn't allow sideloaded applications or even access to a user shell via adb, but the BootROM on the device attempts to boot from the microSD card before eMMC, making it possible for enterprising users to easily modify the underlying OS (as well as making it physically impossible to brick the device with a bad flash). Barnes & Noble even sells a book on rooting the Nook Color; it was right next to the devices on display at the time. In addition, they've continued the tradition of easily modifiable devices with both the Nook Simple Touch and the Nook Tablet.

One of the most impressive possibilities with such open devices is running Ubuntu on them. The absolute poster child for this is the Toshiba AC100. For those of you unfamiliar with it, it is an ultralight netbook that shipped with Android 2.1/2.2, with easy access to the built-in flash via the mini-USB port on the side of the device. Due to the valiant efforts of the AC100 community, Ubuntu was ported to this device and became a supported platform, with images available on cdimages.ubuntu.com. If you were an attendee at UDS, you likely saw several AC100s all running Ubuntu.

This brings me to the point that motivated me to write this post in the first place. One of the most impressive tablets I've seen to date is the ASUS eeePad Transformer, an Android tablet with a fully dockable keyboard. I have one of these devices, and it's one of the most impressive and usable Android tablets I own. Sadly, such a powerful device was hobbled from its true potential by ASUS's decision to ship it with a locked and encrypted bootloader. Surprisingly, the Secure Boot Key (SBK) was acquired and released into the wild, making it possible to reflash the device. Sadly, even with the SBK, the device's bootloader is still extremely hobbled compared to the AC100's, making flashing a slow and difficult process.

In response, ASUS refreshed the eeePad's hardware with the new B70 SKU, which has a new Secure Boot Key. Despite this, a root exploit was recently found that allows people to circumvent these restrictions and install custom ROMs. It is, however, only a matter of time before ASUS responds and releases a new update that fixes this bug.

Steven Barker (lilstevie) on xda-developers successfully created a port of Ubuntu to the Transformer. Currently, installing Ubuntu on the Transformer requires nvflash access, so it's not possible to use his image on the newly liberated B70 devices. I am certain that a new method of installing via an update.zip will be developed for those of us with hobbled devices.

It is a showcase of what is possible when you have open hardware, and it also proves one indisputable point: any 'trusted boot' or DRM scheme can and will be defeated; at best you piss off your userbase, and at worst you force users to exploit bugs to gain control of their devices. And since it is impossible to reflash these devices from the bootloader, a failed kernel flash WILL brick them, increasing warranty and support costs as users try to return their now-broken devices.

In closing, while there have been some victories in the ongoing war of open hardware vs. trusted computing, the road ahead still remains very murky. Victories in the mobile market have shown that there is a market for open devices. Google's own Nexus One was sold as a developer phone and as a way to encourage manufacturers to raise the bar, and it sold well enough to recoup its development costs. While there are no official statements, the Nokia N900 is suspected to have broken all sales expectations, backed up by the fact that Angry Birds sold extremely well on the Ovi Store for the N900.

From the article in question:
What reaction have you had in terms of sales and customer feedback?

Angry Birds had already been launched on App Store before it came out on Ovi Store, and it had a great review average from iPhone reviewers and users alike, so we expected a good reception from N900 users as well.

Even so, we were quite surprised by just how the N900 community immediately took the game to heart. The game obviously made many people very happy, and that is really the greatest achievement that anyone who creates entertainment for a living can hope for. Well, maybe the greatest achievement is huge bundles of cash, but making people happy comes a close second.

In the first week that Angry Birds has been on the Ovi Store, it has been downloaded almost as many times as the iPhone version in six weeks. Given that most N900 users have not even used Ovi Store yet, we are confident that there will be many more downloads in the months to come, and are sure that the N900 version will be very profitable.

That being said, with Microsoft pushing secure boot and trusted computing down everyone's throats with Windows 8, it is hard to say what the future might hold for those of us who want to own our devices.

1 July 2011

Michael Casadevall: Pandaboard Netboot Images Now Available

As I mentioned in my previous blog post, OMAP4 netboot images were available but non-functional. I'm pleased to announce that these bugs have now been resolved and it is possible to have a functional install on OMAP4. This also has the added advantage of allowing special partitioning layouts such as RAID or LVM, or simply having a non-SD-based root device. The images are available here: http://ports.ubuntu.com/ubuntu-ports/dists/oneiric/main/installer-armel/current/images/omap4/netboot/

To use them, simply dd boot.img-serial or boot.img-fb to an SD card, pop it in, power on, and the installer will come up.
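For example, something like this should do it (a sketch; /dev/sdX is a placeholder for your SD card device, so double-check which device it really is before running dd):

sudo dd if=boot.img-serial of=/dev/sdX bs=4M
sync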

There is still a known bug where partman will not properly create the necessary boot partition. During the partitioning step, you must select manual partitioning, then create a 72 MiB FAT32 partition with no mount point and with the Bootable flag set to 'on'. This partition must be the first partition on the device. flash-kernel-installer will be able to find the partition on its own.
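Purely for reference, the layout described above is roughly what you would get by hand with parted (this is not part of the installer flow, it is destructive, and /dev/sdX is again a placeholder for the target disk):

sudo parted -s /dev/sdX mklabel msdos
sudo parted -s /dev/sdX mkpart primary fat32 1MiB 73MiB
sudo parted -s /dev/sdX set 1 boot on
sudo mkfs.vfat -F 32 /dev/sdX1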

30 June 2011

Michael Casadevall: On porting the installer (Part 1)...

So as Alpha 2 approaches, I find myself working towards porting the alternate installer/d-i to the pandaboard to support the netboot installer. There's not a lot of documentation that describes the internals of d-i, nor what bits are platform specific.

This is especially true when working towards creating a new subarchitecture, since lots of little places have to be touched, kernels usually have to be tweaked, and there are all sorts of other odds and ends. This post isn't a comprehensive guide to what's necessary, just some random tidbits of what I did.

The first step of any enablement is to have something you can run and boot. The netboot images, as well as the alternate kernel and ramdisk are built out of the debian-installer package. In the debian-installer package, several config files for driving the process are located in build/config/$arch/$subarch. For omap4, we have the following files:

boot/arm/generate-partitioned-filesystem
build/config/armel.cfg
build/config/armel/omap4.cfg
build/config/armel/omap4/cdrom.cfg
build/config/armel/omap4/netboot.cfg

boot/arm/generate-partitioned-filesystem is a shell script that takes a VFAT blob, and spits out a proper MBR and partition table.
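To give a feel for what that involves, here is a rough sketch of the idea (this is not the actual d-i script; the 63-sector offset, the sfdisk invocation, and the single-partition layout are illustrative assumptions):

#!/bin/sh
# Sketch: wrap an existing VFAT filesystem image in an MBR-partitioned disk image.
set -e
FAT_IMG="$1"     # existing VFAT blob, e.g. boot.img-fat-serial
OUT_IMG="$2"     # output image with MBR + partition table

FAT_SECTORS=$(( $(stat -c %s "$FAT_IMG") / 512 ))

# Reserve 63 sectors for the MBR and traditional alignment, then append the blob.
dd if=/dev/zero of="$OUT_IMG" bs=512 count=63
cat "$FAT_IMG" >> "$OUT_IMG"

# Describe the appended blob as a single bootable W95 FAT32 (LBA) partition.
echo "start=63, size=$FAT_SECTORS, type=c, bootable" | sfdisk "$OUT_IMG"

The essential shape is: reserve room for the partition table, append the filesystem, then describe it in the MBR.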

build/config/armel.cfg is simply a list of subarchitectures to build, plus some sane-ish kernel defaults for armel.

build/config/armel/omap4.cfg is also a simple config file, which specifies the types of images we're building and the kernel to use in d-i. This file looks like this:

MEDIUM_SUPPORTED = netboot cdrom

# The version of the kernel to use.
KERNELVERSION := 2.6.38-1309-omap4
# we use non-versioned filenames in the omap kernel udeb
KERNELNAME = vmlinuz
VERSIONED_SYSTEM_MAP =

As a point of clarification, 'cdrom' is a bit of a misnomer; it refers to the alternate installer kernel and ramdisk used by alternate images, and not to the type of media. Other types of images exist, such as 'floppy' and 'hd-install', but these are specialized images and out of scope for this blog post.

Each file in build/config/armel/omap4/* is a makefile that's called in turn for each image that is created. The most interesting of these is netboot.cfg:

MEDIA_TYPE = netboot image
SUBARCH = omap4
TARGET = $(TEMP_INITRD) $(TEMP_KERNEL) omap4
EXTRANAME = $(MEDIUM)
INITRD_FS = initramfs

MANIFEST-INITRD = "netboot initrd"
MANIFEST-KERNEL = "kernel image to netboot"
INSTALL_PATH = $(SOME_DEST)/$(EXTRANAME)

omap4:
# Make sure our build environment is clean
rm -rf $(INSTALL_PATH)
mkdir -p $(INSTALL_PATH)

# Generate uImage/uInitrd
mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 -n "Ubuntu kernel" -d $(TEMP_KERNEL) $(INSTALL_PATH)/uImage
mkimage -A arm -O linux -T ramdisk -C none -a 0x0 -e 0x0 -n "debian-installer ramdisk" -d $(TEMP_INITRD) $(INSTALL_PATH)/uInitrd

# Generate boot.scrs
mkimage -A arm -T script -C none -n "Ubuntu boot script (serial)" -d boot/arm/boot.script-omap4-serial $(INSTALL_PATH)/boot.scr-serial
mkimage -A arm -T script -C none -n "Ubuntu boot script (framebuffer)" -d boot/arm/boot.script-omap4-fb $(INSTALL_PATH)/boot.scr-fb

# Create DD'able filesystems
mkdosfs -C $(INSTALL_PATH)/boot.img-fat-serial 10240
mcopy -i $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/uImage ::uImage
mcopy -i $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/uInitrd ::uInitrd
mcopy -i $(INSTALL_PATH)/boot.img-fat-serial /usr/lib/x-loader/omap4430panda/MLO ::MLO
mcopy -i $(INSTALL_PATH)/boot.img-fat-serial /usr/lib/u-boot/omap4_panda/u-boot.bin ::u-boot.bin
cp $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/boot.img-fat-fb
mcopy -i $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/boot.scr-serial ::boot.scr
mcopy -i $(INSTALL_PATH)/boot.img-fat-fb $(INSTALL_PATH)/boot.scr-fb ::boot.scr
boot/arm/generate-partitioned-filesystem $(INSTALL_PATH)/boot.img-fat-fb $(INSTALL_PATH)/boot.img-fb
boot/arm/generate-partitioned-filesystem $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/boot.img-serial

# Generate manifests
update-manifest $(INSTALL_PATH)/uImage "Linux kernel for OMAP Boards"
update-manifest $(INSTALL_PATH)/uInitrd "initrd for OMAP Boards"
update-manifest $(INSTALL_PATH)/boot.scr-fb "Boot script for booting OMAP netinstall initrd and kernel from SD card. Uses framebuffer display"
update-manifest $(INSTALL_PATH)/boot.scr-serial "Boot script for booting OMAP netinstall initrd and kernel from SD card. Uses serial output"
update-manifest $(INSTALL_PATH)/boot.img-serial "Boot image for booting OMAP netinstall. Uses serial output"
update-manifest $(INSTALL_PATH)/boot.img-fb "Boot image for booting OMAP netinstall. Uses framebuffer output"

The vast majority of this is fairly straightforward. TARGET represents the targets called by make. There are tasks for creating a vmlinuz and an initrd that must be included. The omap4 target then does the specialized handling for the omap4/netboot image.

omap4 requires a VFAT boot partition on the SD card with a proper filesystem and MBR. The contents of the filesystem are straightforward:

MLO - also known as x-loader, a first stage bootloader
u-boot.bin - u-boot binary, second stage bootloader, used to boot the kernel
uImage - linux kernel with special uboot header (created with mkimage)
uInitrd - d-i ramdisk with special uboot header
boot.scr - boot script containing the u-boot commands to execute at startup.

MLO and u-boot.bin are copied in from x-loader-omap4-panda and u-boot-linaro-omap4-panda which are listed as build-deps in the control file for d-i. boot.scr is generated from a plain-text file:

fatload mmc 0:1 0x80000000 uImage
fatload mmc 0:1 0x81600000 uInitrd
setenv bootargs vram=32M mem=456M@0x80000000 mem=512M@0xA0000000 fixrtc quiet debian-installer/framebuffer=false console=ttyO2,115200n8
bootm 0x80000000 0x81600000

These are u-boot commands that simply load the uImage/uInitrd into RAM, set the command line, and then boot into it.

When porting the installer, it is mostly a task of putting your subarchitecture name in the right places, then adding the necessary logic to spit out an image that boots. This provides a sane base to start working on porting other bits of the installer. When d-i is uploaded to Launchpad, these files end up in http://ports.ubuntu.com/ubuntu-ports/dists/oneiric/main/installer-armel/current/images/

My next blog post will go a bit into udebs, and understanding how d-i does architecture detection, and introducing flash-kernel.

11 November 2010

Michael Casadevall: On Achieving Goals ...

*blows the dust off his blog*

It's been quite a while since I last wrote anything in this thing, so I guess it's as good a time as any, coming off the tail of UDS-N, to finally sit down and write something. Since my last blog posting in May, I've been travelling around the world, attending conferences, and trying to represent Ubuntu, Debian, and Canonical in all my travels. I was recently posted to China for a full month, where I lived and worked out of an office, made friends with fellow Ubuntu users and tech enthusiasts, had a wonderful time meeting people from the Beijing Linux User's Group, and worked hard within my team at Canonical, and with the legions of Ubuntu developers, to help make one of the best releases we've ever had.

As I was writing specs and drafting work items, I got to the point where I was reflecting on my own personal goals and growth in life. Two years and change ago, I was a struggling junior at Rochester Institute of Technology; a little less than two years ago, I started working full time with Canonical; and this year, I've gone and traveled to many places I'd only dreamed about: Anchorage & Barrow, Alaska; Tampere, Finland; Prague, Czech Republic; and Brussels, Belgium, just to name a few. I packed up and lived in China for a month (an amazing experience, and I look forward to going back and visiting again sometime in the near future). Had you told the me of two years ago what I would be doing now, I'd probably have thought you were smoking something good.

One thing I've discovered in my life is that if you want to do something, you need to get out there and just ****ing do it. This may seem simple, but I think of the dreams a lot of people have that never seem to come to fruition. When I have an opportunity, I take it; going to Alaska was a dream I had harbored for many years, especially entering the Arctic Circle and heading to Barrow. That entire trip was booked on roughly four days' notice, and the side trip to Barrow was planned the day before it actually happened. I don't regret any of it; it was one of the best things I've ever done.

That brings me to what I consider the crux of this blog posting. For those who know me, I've had a goal of visiting every state within the United States. At this time last year, the map looked like this:

As of two weeks ago today, this map looked like this:



Now, by the end of the day today, it will look like this:



The point I'm trying to make is that if you want to do something, do it the first chance you get, or just don't do it; I say this because you never know when you will be able to do it again.

As for completing a life goal, well, it feels pretty amazing :-). I may write about that in a future blog posting ...

17 May 2010

Michael Casadevall: Mailing Lists and Newsreaders ....

So for a while, I've been wondering why very few FOSS projects use newsgroups (or USENET moderated groups) as their primary means of communication between developers. I personally find email to be a clunky way of dealing with mass mailings, and while switching to alpine and mutt has helped, I feel newsgroups would work better in place of mailing lists.

In this spirit, I'm going to try replacing most of my public-facing mailing lists with Gmane and see if my opinion actually holds true, i.e. whether newsgroups are superior to bog-standard mailing lists, and post back in some time. I'm currently looking at various GUI and console-based newsreaders to see which works best for my needs.

I also need to blow the dust off the memories and remember how to set up leafnode so I can mimic OfflineIMAP. Any suggestions are welcome.
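For my own notes, the basic leafnode setup is roughly the following (a sketch from memory; the config path and the Gmane server name are assumptions and may need adjusting):

sudo apt-get install leafnode
sudo tee -a /etc/news/leafnode/config <<'EOF'
server = news.gmane.org
expire = 30
EOF

After that, the newsreader points at localhost as its NNTP server, and fetchnews pulls down the subscribed groups.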

5 February 2010

Michael Casadevall: 1984

It's been a while since I last posted, but I felt the need to do so after my recent flight through ATL. While I was waiting for my connection, the loudspeakers announced that we were at threat level "Orange" and that we, the people, had to be on guard for threats to our country.



Does this remind anyone of anything? If your answer is 1984, then you win. When did we get to the point where our government feels more and more like the Party? I am just waiting for the day when we have telescreens constantly watching for terrorism and other threats ...



I hope this day never comes but given the path we have been on since 9/11, I fear for the future ...

27 May 2009

Michael Casadevall: UDS Day 3 & KDE involvement

Hola all,
So this is my first blog post since UDS started (although I have been doing some microblogging this session; I find that if I treat it like multicast IRC using Gwibber, it suddenly makes more sense to me). We're now three days into UDS and working hard on defining what Ubuntu karmic will be, and I must say I am excited with the way things are shaping up, from the UNR discussions to the Android Execution Environment (and if anyone has any questions on it, please direct those emails to Michael Frey and Debbie Beliveau, as they are the people behind it, despite Slashdot's reports on the subject).

There's loads going on, including Moblin (which you'll see this afternoon), Android (same), ports kernel handling, and loads of other cool things coming up. I'll comment on some of the more interesting things as time goes on.

In other news, as of late last night, I'm officially an upstream KDE developer with SVN commit rights. I've written an email detailing my plans for working on KDE to kde-core-devel, where it is happily stuck in a moderation queue, so hopefully those involved in upstream KDE development will soon learn of my intentions :-).

I'll write more later,
Michael

Michael Casadevall: Jaunty Retrospect

Guess it's my time to post something about my feelings on Jaunty.

The Good:
* armel successfully birthed
* powerpc well on the way to being well maintained
- Kernel and installer work mostly done by TheMuso (thanks :-))
- Image testing by the people of #ubuntu-ps3, myself, and TheMuso
- powerpc and powerpc+ps3 both in the release announcements for Kubuntu and Xubuntu :-)
* Kubuntu upgraded to KDE 4.2
* Xubuntu upgraded to Xfce 4.6
* PowerPC FTBFS rate in main very low; ia64 and sparc looking much improved.

The Bad:
* SPARC, ia64, and HPPA remain fairly foobar w.r.t. the installer and kernel

The Ugly:
* The drama over notifications and update-manager

All in all though, I think it's been a fairly good cycle. Looking forward to karmic.

16 January 2009

Michael Casadevall: Re-engineering my network ...

As I move to having more and more machines on my internal LAN, I felt the time had finally come to sit down and rebuild my network to take advantage of things such as gigabit networking, LDAP, single sign-on, and so forth. I'm doing this partially for fun, and partially because it's an interesting experiment to see how a Linux-based IS environment compares to a Windows 200x IS environment (one of my former jobs was a 2000/XP/2003/Vista sysadmin position).

So, here's my current network setup:

blacksteel <-- *wireless* --> cerberus <--> Internet
dawn       <-- *wired* -----> cerberus
360        <-- *wired* -----> cerberus

Online machines:
cerberus - WRT51GS
blacksteel - My laptop
dawn - Development machine
360 - Xbox 360, used to play media from blacksteel

Offline machines (aka machines I have, but haven't fired up since moving):
helios (PowerMac G4)
apollo (old Dell P3)
junker (RS/6000 rescued from the dumpster, might be dead)
alexandria (NSLU2; gave up its plug for dawn)
coldfusion (ColdFire board, might be dead; the ethernet controller is faulty, but I might be able to use a USB-based one to breathe some life into it; it can't auto-reboot because the built-in bootloader doesn't support that, and there's no JTAG to sanely change the default bootloader).
siren (old MacBook Pro, has a dead internal HDD, but runs fine from an external hard drive. Was my Debian test box until its HDD went to dawn)
exodius - second WRT54GS; used to be part of a WDS bridge.
unnamed dev box (not here yet, but likely soon).

Of all these machines, only apollo has a wireless card, which ATM is non-functional. In addition, the wired part of my network is 100 Mbps, with a g-based wireless hotspot (WPA secured). Furthermore, blacksteel, helios, and siren have gigabit ethernet; apollo has a 100 Mbps ethernet card; alexandria and dawn have 10 Mbps, which is painful, especially for NFS root.

I'll drop another 1 Gbps NIC into apollo, replacing its wireless card, and give dawn, alexandria, and maybe coldfusion USB-based NICs once I get around to resurrecting those systems (alexandria and coldfusion don't have hard drives at the moment).

What I would like to do is use a Linux-based router to replace cerberus. helios has two gigabit NICs, so it will take up this duty, as well as provide DHCPv4 and radvd (for IPv6) for the internal network. It's an old computer with an onboard modem, and its position in my apartment will be close to a phone jack; maybe I'll set it up so I can dial in from outside the LAN in case something goes down (although my phones here are VoIP based, so I dunno how useful that's going to be :-)).
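The radvd half of that is tiny; a minimal sketch would be something like this (the interface name and the prefix are placeholders for illustration, not my actual addressing):

sudo tee /etc/radvd.conf <<'EOF'
interface eth1 {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
EOF
sudo /etc/init.d/radvd restart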

Another box (I might task this to apollo, or helios) will run LDAP and NFS services, providing both a netboot-based installation with preseed for fast re-installation, and NFS home folders for all machines except blacksteel (unless someone knows a great solution for having a laptop sync NFS and local home folders). helios will run mail, news, and any other untrusted net-facing services, with everything else shielded behind it. All machines will run IPv4 and IPv6.

Anyway, this is the start of my plan in a nutshell, and I intend to continue discussion as I slowly build and implement this updated setup. Wish me luck :-).

2 January 2009

Michael Casadevall: Notes from Underground, Part 1

For those following d-devel, you may have noticed that I've recently been working on improving one of the cornerstones of Debian infrastructure: the Debian Archive Kit, or dak for short. Most DDs and DMs don't notice dak exists except when trying to determine why their latest upload was rejected, and then yelling at the powers that be. I'm here to shed some light on this mythical beast.

First off, a quick history lesson:

dak (also known as projectb) is a replacement for Debian's original archive software, known simply as dinstall. dinstall itself was a fairly large Perl script that did what dak process-unchecked/process-accepted does today. James Troup wrote a fairly decent summary of dinstall and its issues.

James Troup's README.new-incoming (from dak's git repo):

The old system:
---------------
o incoming was a world writable directory

o incoming was available to everyone through http://incoming.debian.org/

o incoming was processed once a day by dinstall

o uploads in incoming had to have been there > 24 hours before they
were REJECTed. If they were processed before that and had
problems they were SKIPped (with no notification to the maintainer
and/or uploader).

dak's first commits were in 2000, and it was rolled out onto ftp-master.d.o sometime in 2001 or 2002 (I can't find an exact date for this). Since then, dak has also come into use on security.d.o and on backports.org (fun fact for bpo people: the dak installation there is now up to date, and tracking git's tip).

So now that you know the history lesson, the next question is what specifically dak does. Simply put, dak is the glue that binds the rest of Debian's backends together; both britney and wanna-build/buildd depend on it. It handles management of uploads to the archive, handles stable release updates, and so forth. It is also the only Debian archive software that uses an actual database backend, and it scales fairly well, handling over 10,000 packages and 12 architectures. Unfortunately, there are also a lot of issues with dak as it stands.

Sections of the code base have bitrotted over the years: legacy and legacy-mixed support have died, the import-archive function is shot (more so now than ever, see below), the test suite is non-functional (never a good sign), the docs are out of date and in many places non-existent, doing a release (both point and full) requires editing the database by hand, and so forth.

In addition, dak, while written in Python, is written in a fairly procedural style, and has some very ugly code in places. For instance, the original Debian Maintainer code was handled by having the uids in the database prefixed by dm: instead of having a flag somewhere, and it had some hardcoded variables like checking for "unstable", as well as quite a few bugs which caused interesting behavior when uploading to a non-unstable suite such as experimental or one of the proposed queues. (For those of you curious, I recommend checking the dak git tree to see what the old DM code looked like, and then, aside from the design, finding the two major bugs which caused a lot of the weirdness with DMs.) It should be stated that the last merge redid the DM code and design sanely using the new update framework.

These issues have led to the genesis of the dak v2 project, which is an attempt to replace dak with a rewrite, built from the ground up to be more secure and modular, although it hasn't gotten very far as of this writing. I personally don't believe that the current iteration of dak is so bad that scrapping and rewriting it is necessary. Instead, I've been working to implement v2 features in dak by aggressive refactoring and cleanup, with the hope of negating the need for a rewrite.

So now that that's out of the way, I bet you're probably interested in my .plan for dak. Well, let's go over what I've implemented so far.

* An update-database framework for dak, which allows for easy database upgrades and migration, vs. the "does it work yet?" approach to applying schema updates. Simply type dak update-db, and you're done!

* 822 formatted output for queues (http://ftp-master.debian.org/new.822); this information is now used on DDPO pages

* Rewriting DM management code to have more of a brain than the previous implementation.

What's next on the TODO list:

* Content file generation from the database (part of the removal of apt-ftparchive, but that's another blog post ;-)).

Oh, as a side note to my current readers, my blog has changed names to "Notes from Underground", after one of my favorite novels, and further in reference to exploring the mysterious underground that is Debian's backend code. We're also now on Planet Debian :-).

30 December 2008

Joerg Jaspert: Fuck it! What could possibly go wrong?

Time for another ftpmaster blog. It's not like we stopped doing things. :) It seems we, somehow, got a pretty active coder (yay), Michael Casadevall. In the last few days I merged multiple code changes from him. The first was a different output format for our NEW queue overview page. It is now also available in 822 format for all those people who want to use the data in their tools, like QA for example. Today brought a second change touching this: we now not only list NEW/BYHAND in there, but also our incoming (accepted) and Proposed-Updates queues. There is also a patch waiting to fix up the pretty weird code that's around the DM stuff, which will fix some of the many bugs and weirdnesses in the current implementation. And it seems he is right now trying to get rid of apt-ftparchive, the beast that makes our dinstall run take so long. Oh, we will be so happy when we get rid of it. :) Also, we have two new people as trainees for the ftpteam. If that works out, we will have yet another 2 new members in our team. Yay.

16 October 2008

Petr Rockai: adept 3.0 beta 4

I have released the fourth beta of Adept today. For the unaware, Adept is an APT front-end for KDE: it lets you install, remove, and upgrade software and such on your Debian and Kubuntu boxes (and maybe on some other Debian derivatives, too).

Changes since Beta 2

Since I did beta 3 somewhat in a hurry, I had regressed the installer component badly — to the point of complete unusability. However unsound a software engineering practice it may be, the deep freeze for Kubuntu Intrepid hits today, so I have released beta 4 in rapid succession. Other than the regression fix, I have mostly reworked the paging implementation in the installer — however, the report of a complete installer rewrite (hi Jonathan) is somewhat exaggerated. Sadly, the KPageWidget we had been using before turned out to be completely unsuitable for the job, so a quick rewrite probably fixed more issues than it caused (and this seems to be confirmed by early testing feedback). Moreover, this beta got a bunch of improvements to the search, including package name hits, a set of cosmetic improvements (button icons, nicer sidebar, etc.) and a button to run software-properties-kde from the Sources tab (works on Intrepid, but sadly breaks on Debian for now… will fix later. Need time, need sleep.)

Where to get

I have prepared binary packages. These should now be part of Debian Sid (unstable) and also of Kubuntu Intrepid. You should be able to get the new version from your distribution:
apt-get install adept
Again, as with beta 2, there is no Hardy backport, since my time is tight. If anyone is willing to do a backport, please drop me a note: I will gladly publish installation instructions.

Heroes of Beta 4

Jobs for Beta 5, RC and Final

Mostly just testing and reporting bugs. I currently plan a beta 5 with a few fixes that have started accumulating since beta 4 already (mostly one-liner fixes with minimal risk of regressions). After that, an RC is in order, and then a final release (finally…). The notifier is not included; Kubuntu has an independent sort of notifier now, I believe. Maybe I'll find time to reconcile it in 3.1 or so. Moreover, I assume that beta 4, or a very minorly patched beta 4, will become part of Kubuntu Intrepid — there should be no major roadblocks left, so I hope everything will bode well for Intrepid. I assume that whatever becomes 3.0 final will eventually be included in Intrepid updates. And then, plans should be set forth for 3.1…

Known issues

The about dialog still says beta 2. Bummer. Please try to note in your reports that this is actually a beta 4 that you are using. I'll make sure beta 5 fixes that — but it might be another week till then, or so. The Intrepid version should have that fixed, thanks to Jonathan Thomas (again).