Search Results: "fabbione"

2 December 2007

Brandon Holtsclaw: ugh

fabbione, why oh why did you upload openssl097 UNPATCHED when there were known security issues with this version, with the fix even linked to in the bug? This software was removed from Gutsy for a reason; if it's going to be added back it should AT LEAST be patched. As it stands right now, anyone who installs vmware-server via Canonical's Partner Repository is remotely exploitable, and this was known prior to the upload. Is this a case of $$ from VMware meaning more than security? I'm sorry, but this is simply not acceptable from Canonical as far as I'm concerned. It isn't a case where VMware is distributing it; it's us, or at least Canonical, distributing it to STABLE releases. At this point a patch to fix the issue will simply not be enough; I want to know why this happened, and what's going to be done to ensure the distribution I spend many hours volunteering for isn't going to allow this to happen again. I love Ubuntu and hate to see things like this happen; let's make sure they don't. /* annoyed */

21 July 2007

Fabio M. Di Nitto: SUN's LDOM hits 2.6.23

Yes, and really proud of it.. David did an awesome job fixing the tons of bugs I have been reporting over the past couple of weeks with my super stress testing of the implementation. I have to admit that he was also > this close to scrambling the "black choppers" towards my house, but that's usually a good sign when I can piss him off this much :) ..because we did beat Xen ;) David's implementation of LDOM was merged upstream after about 2 months of development. Xen took only 3/4 years? :P Anyway, just kidding.. don't take it personally. Xen is a fully mature implementation and LDOM still lacks functionality and features, but we are getting there very, very quickly. For now I am going on vacation for a few weeks with my son and plan to stay as far away as possible from any form of electronic equipment, and give David some slack to relax too. Once back, I plan to hack on my LDOM workarounds to better handle some "features" of the vds partition checker and complete the tests with exporting single partitions. Thanks to Barton for shipping me a couple of extra disks for my Niagara; I will also be able to test other combos with Nevada and different guest setups. Stay tuned!

18 July 2007

Fabio M. Di Nitto: More LDOM love

I finally decided to publish the few hacks I am using to run Linux on SUN's LDOM.
They are not pretty but they get the job done for me.
I take 0 responsibility for them so use them at your own risk.
Use this HOWTO only if you are daring enough to mess with your system. Do *NOT* use it on production systems if you do not understand what you are doing.
Step 1: read all the LDOM documentation that SUN has made available for you.
Step 2: understand that this is all experimental software.
Step 3: repeat 1 and 2 a few times.
The LDOM server in Solaris allows exporting 3 kinds of block devices (file, partition/slice, entire disk) to the guest, and these are seen as virtual disks.
The guest cannot really tell the difference, but it is important to understand the way in which the LDOM software plays with them.
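For reference, this is roughly how a backend ends up exported as a virtual disk with ldm (the volume, service and guest names vol1, primary-vds0 and foo1 are just examples, and your ldm version may spell the subcommands slightly differently, so treat this as a sketch):
ldm add-vdsdev /dev/rdsk/c1t1d0s2 vol1@primary-vds0
ldm add-vdisk vdisk1 vol1@primary-vds0 foo1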
I did *not* test exporting a partition/slice, and my hacks do not know how to cope with it. You have been warned.
The problem: the LDOM virtual disk daemon mangles partition tables.
In an attempt to make sure that the partition tables exported to the guests are sane, the daemon ends up trashing valid ones, because the check it forces on the devices is too strict and it behaves differently depending on whether the device is a file, a partition/slice or an entire disk.
The ldm bind operation is the dangerous one to perform, and it can kill your partition table for good.
How the ldm bind operation works:
- for entire disks:
  * check the device partition table
  * fix the device partition table if the check fails
  * lock the device
  * export the device to the guest
  * store the status of the device once it is validated and never check it again
- for files:
  * check the device partition table
  * fix the device partition table if the check fails
  * export the device to the guest
Remember that I did *not* test exporting a slice and that my hack does not take it into account. You have been warned twice.
Limitations of the workaround:
First of all, I assume that you installed LDOM 1.0 in the standard path (/opt/SUNWldm/); otherwise you might need to edit the script manually to match an alternate path.
Each guest must use the same kind of devices; the workaround does not yet know how to cope with exporting files and partitions to the same guest.
You know how to use dd and some other basic tools.
If you start playing with bind/unbind operations manually and things break, you get to keep all the pieces.
How to:
This list of operations sounds extremely complex, but it is not, and it is required only at first install time, and possibly to update your partition table backups if you decide to change them. The changes to the LDOM init script will take care of restoring the last known-working partition table on each reboot.
NOTE: if you fail to follow these instructions, or use invalid configuration entries or backup partitions, LDOM might not start anymore or might even core dump. TAKE EXTREME CARE (and learn how to reboot into factory-defaults at the SC/ALOM).

1 - get the few files from here. This html file is there too for your convenience.
2 - ldmd_start is the newly modified init script for LDOM. It should be placed in /opt/SUNWldm/bin and made executable. Note that if there is no configuration file, or the configuration file is empty, nothing will happen and you will be running the exact same script as the original. ldmd_start.orig and .diff are there for your convenience, to check that I did not add a rootkit ;).
3 - ldom.linux is an example configuration file that you want to place in /etc.
4 - mkdir /.ldom.linux ; we will use this directory to store the backup partition tables.
5 - decide what kind of device you want to export.
Follow steps 6 and 7 if you decided to use a full disk as the device; skip to step 8 if you are using a file.
6 - Use the format command from Solaris and make sure to label and format the disk.
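Step 6 is interactive; a hypothetical transcript (the disk name is an example and the menu depends on your hardware):
# format
(select c1t1d0 from the list of disks)
format> label
Ready to label disk, continue? y
format> quit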
7 - Take a copy of the clean partition table.
Example: dd if=/path/to/device of=/.ldom.linux/$guestname-$(basename /path/to/device).format count=1 bs=512
So let's say that your guest is called foo1 and you are exporting /dev/rdsk/c1t1d0s2; you would issue the following command:
dd if=/dev/rdsk/c1t1d0s2 of=/.ldom.linux/foo1-c1t1d0s2.format count=1 bs=512
Continue from here if you are exporting a file.
8 - Configure your LDOM guest following the LDOM administration guide.
9 - Netboot and install Linux in the guest.
10 - Before rebooting into the installation, halt the guest and unbind it. The unbind is required to unlock the exported device for reading.
11 - Take a copy of the partition table that has been created by Linux.
Example: dd if=/path/to/device of=/.ldom.linux/$guestname-$(basename /path/to/device).backup count=1 bs=512
Following the above example: dd if=/dev/rdsk/c1t1d0s2 of=/.ldom.linux/foo1-c1t1d0s2.backup count=1 bs=512
or
dd if=/path/to/file of=/.ldom.linux/foo1-file.backup count=1 bs=512
12 - Edit the configuration file in /etc that you installed at step 3.
Each line contains:
guest_name disk1 disk2 ...
NOTE: you do not need to add entries for Solaris guests; only Linux guests are required! Empty lines are skipped, and no comments are allowed in the file.
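For illustration, a /etc/ldom.linux for two Linux guests could look like this (guest and disk names are examples; check them against ldmd_start if in doubt about the exact device spelling it expects):
foo1 c1t1d0s2
foo2 c1t2d0s2 c1t3d0s2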
Follow steps 13 to 19 if you are exporting a disk.
13 - Restore the Solaris partition table:
Example: dd conv=notrunc if=/.ldom.linux/$guestname-$(basename /path/to/device).format of=/path/to/device bs=512 count=1
14 - Bind the guest
15 - Unbind the guest again. HINT: this is the most important step here, because the LDOM disk server has now validated the partition table and will not check it again.
16 - Restore Linux partition table:
Example: dd conv=notrunc if=/.ldom.linux/$guestname-$(basename /path/to/device).backup of=/path/to/device bs=512 count=1
17 - Bind the guest again.
18 - Start the guest.
19 - Enjoy.
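Put together, the whole disk dance from steps 13 to 18 looks roughly like this for the foo1/c1t1d0s2 example used above (the long ldm subcommand names are assumed here; use whatever aliases your ldm version provides):
dd conv=notrunc if=/.ldom.linux/foo1-c1t1d0s2.format of=/dev/rdsk/c1t1d0s2 bs=512 count=1
ldm bind-domain foo1
ldm unbind-domain foo1
dd conv=notrunc if=/.ldom.linux/foo1-c1t1d0s2.backup of=/dev/rdsk/c1t1d0s2 bs=512 count=1
ldm bind-domain foo1
ldm start-domain foo1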
Follow steps 20 to 23 if you are using a file.
20 - Bind the guest.
21 - Restore Linux partition table:
Example: dd conv=notrunc if=/.ldom.linux/$guestname-$(basename /path/to/file).backup of=/path/to/file bs=512 count=1
22 - Start the guest.
23 - Enjoy.
This said, the script will allow you to reboot the controller without having to worry about Linux guests, and it should ensure that LDOM always starts properly. LDOM stores the status of each guest in the Machine Description, and we use this status to restore the partition tables too. If the guest is in either bound or active status, we take the proper actions to stop/unbind/restore/bind/start. If the guest is in the unbound state, we do nothing, because we do not know why it is in that state. The script is also not foolproof: failing to stop or unbind the domain, or to perform the partition table restore operation, might cause LDOM to abort and can require manual fixing, but the above information should be enough for any experienced user to recover. Have fun
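I will not paste ldmd_start here, but the per-guest logic it applies at boot is roughly the following sketch. This is a reconstruction from the description above, not the actual script: the status parsing, the ldm subcommand spellings and the device paths are all assumptions, so read the real ldmd_start before trusting any of it.
#!/bin/sh
# For each "guest disk1 disk2 ..." line in the config, restore the
# backed-up Linux partition tables before (re)binding the guest.
LDM=/opt/SUNWldm/bin/ldm
while read guest disks; do
    [ -z "$guest" ] && continue                # empty lines are skipped
    # assumption: second field of the second output line is the state
    state=$($LDM list-domain "$guest" | awk 'NR==2 {print $2}')
    [ "$state" = "inactive" ] && continue      # unbound: do nothing
    $LDM stop-domain "$guest" 2>/dev/null      # may fail if only bound
    $LDM unbind-domain "$guest"
    for d in $disks; do
        dd conv=notrunc if=/.ldom.linux/"$guest-$d".backup \
           of=/dev/rdsk/"$d" bs=512 count=1
    done
    $LDM bind-domain "$guest"
    $LDM start-domain "$guest"
done < /etc/ldom.linux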

12 July 2007

Fabio M. Di Nitto: First SUN's LDOM install from archive: a full success!

Yet another masterpiece of work in cooperation with the Ubuntu kernel team and Super Davem. We finally got all the userland and kernel bits and pieces into the archive to install Ubuntu Gutsy in an LDOM guest. The installer works smoothly (except for a bug I found in the partitioner and one in the software installer, neither of which is sparc-specific) and it is pretty fast too. Only netboot/netinstall is available at the moment, but that's plenty compared to the knowledge people need to run this kind of infrastructure in its early release stages. LDOM is not yet considered very stable and there are a bunch of workarounds required in Solaris to run Linux properly. I will post them sometime soon, when I feel more confident that they will not break more than they fix. It is a really big satisfaction to see so many pieces falling together nicely in such a short time. Well done everybody.

28 June 2007

Fabio M. Di Nitto: Linux running on SUN's LDOM!

Guys.. meet Super Davem. We did it again. After another *really* long session of super-tag-team play, with Davem coding like hell and me testing like crazy, we finally managed to do a full install of Linux (based on the Debian/Ubuntu installer) on SUN's LDOM as a guest (similar to the domU concept in Xen). There is still a lot of work to do and major problems to solve before we will be able to call this "stable" for the generic user. David's blog has detailed instructions on how to install and a complete list of the issues. If you plan to try it, make sure to read David's blog! A net-boot gutsy installer image is available here and a net-boot kernel here. The code published by David will hit the gutsy kernel as soon as we are satisfied with the overall situation. Most of the userland bits required to install are already in the archive and the others will land soon after the Tribe-2 release. STAY TUNED!

2 April 2007

Fabio M. Di Nitto: Things of life

I know we are all geeks to the bone, that we start shaking while opening the package of our latest super-techy toy, but nothing can give you as much satisfaction as watching your son lying on his stomach, sticking his "bottom" up in the air, and crawling around for his very first time. That made my day! :)

22 March 2007

Fabio M. Di Nitto: PUMP UP THE VOLUME!

Two of these toys just landed on my lap:

QLogic Fibre Channel HBA Driver
qla2xxx 0000:03:04.0: Found an ISP2422, irq 69, iobase 0xf6000000
qla2xxx 0000:03:04.0: Configuring PCI space...
qla2xxx 0000:03:04.0: Configure NVRAM parameters...
qla2xxx 0000:03:04.0: Verifying loaded RISC code...
qla2xxx 0000:03:04.0: Allocated (1061 KB) for firmware dump...
qla2xxx 0000:03:04.0: Waiting for LIP to complete...
qla2xxx 0000:03:04.0: Cable is unplugged...
scsi3 : qla2xxx
qla2xxx 0000:03:04.0:
QLogic Fibre Channel HBA Driver: 8.01.03-k
QLogic QLA2460 - PCI-X 2.0 to 4Gb FC, Single Channel
ISP2422: PCI (66 MHz) @ 0000:03:04.0 hdma-, host#=3, fw=4.00.27 [IP]

more FC-HBA! more fun! My Niagara T2000 now has 4 FC-HBA controllers (3 PCI-E + 1 PCI-X), topping what my ancient SAN can really do considering its age and the few parts branded Digital, but best of all is the above dmesg coming from a J5000 pa-risc machine. The QLogic controller was the first one to actually pass POST (as opposed to the Emulex, which hangs everything hard). One cluster, 5 different architectures, 7 machines. This project is becoming interesting for stress-testing the portability of things like OCFS2 or GFS2 and their userland tools. Sometimes I wish I had a few PCMCIA -> FC-HBA adapters to plug in a ppc laptop and an Amiga 1200 m68k :)

21 March 2007

Fabio M. Di Nitto: When your upstream rocks!

Working on cluster testing is not easy, and debugging can be far more complicated than you might think, given the distributed nature of the software you are running. So the RedHat Cluster guys (Patrick and Lon) and I have been sitting together on IRC for the past couple of days trying to figure out what the hell was wrong with this bug that blocked services from being switched from one node of the cluster to another, which is sort of vital functionality for a cluster. It all turned out to be a kernel race condition in the DLM (Distributed Lock Manager) that we were able to work around in userland with a one-liner, by changing the way the resource manager tells the kernel (via libdlm2) to de-allocate the lockspace.

So what's the point? Simple.. these guys could have just turned me down and sent me away since we were testing on Ubuntu, but they didn't. They have been a great help (never mind that you can really have a good time working with them) and extremely responsive. They really rock hard. Clearly.. get to know your upstream! It's vital for your activity as an Ubuntu developer. Get to know what you are working on! Packaging is not just a matter of creating a debian/ dir in your source to build the code and ship a .deb. You have to know what's in that code and how things work. Upstream can always help you, but you need to be helpful to upstream too, and let them understand that you are not just another packager for yet another distribution.

18 March 2007

Fabio M. Di Nitto: Dear Hobbsee..

congratulations! you finally motivated me enough to check if my blog still works.. no wonder, after over 15 months since the last entry...

2 January 2006

Zak B. Elep: [REVU] for 2005

Now here’s the thing I’m supposed to write ;) It took me quite some time since I was too busy sleeping ;) Looking back at 2005:
  • February :
  • March :
  • April :
  • May :
  • June :
  • July :
  • August :
  • September :
  • October :
  • November :
  • December :
  • Things I want to do this 2006:
  • Things to look out for:
  Rock on, 2006!

31 December 2005

Zak B. Elep: Going on, and on, and on

Deep breathing here. Finally, sem break! Well, actually, it has been sem break since the 15th, and only now at the end of this month did I feel it. I was so busy doing things, both online and offline, that I didn't even realize that the next sem will be up the week after next. Sigh… So, what have I been doing since school closed? Mostly Ubuntu work. Aside from hacking, I've been busy baking as well, and preparing for next sem. The only thing I'm quite missing is the "traditional" Halloween get-together with my High School batchmates; hell, it is only 1 year to go before a Homecoming… PS: My Ubuntu work has paid off. Thanks to Kamion, Seveas, smurf, dholbach, tseng, and ogra for voting me in as an Ubuntu Member! And to Jerome Gotangco and the Ubuntu-PH team: mabuhay kayo! (long live!) :)

18 October 2005

Scott James Remnant: bzrk 0.1

Over the weekend, in a stunning example of timing, fabbione showed me a screenshot of gitk, the branch visualisation tool for git that he'd discovered while getting the Ubuntu kernel patches into a branch. He, quite rightly, was upset that we didn't have anything like this for bzr. Well, we do now: it's implemented as a bzr plugin, so you simply run "bzr visualise" (or "bzr viz") in a working tree and a window opens to show you the history. Download here
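A minimal usage sketch, assuming the plugin has been dropped into your bzr plugins directory (the branch path is just an example):
cd ~/src/my-tree
bzr viz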