
There was an immediate need to experiment on a non-production HPVM (Hewlett-Packard's name for its now-deprecated Itanium virtual machine technology) in order to have questions answered before Sunday Maintenance, so I got started right away by adding an HP2120 SCSI-attached disk tray to house the experiment's disks - which completely masked the boot drives, no matter the setting. Referring back to The Great HP Virtual Machine MC/ServiceGuard Cluster Experiment of 2010, I noticed I'd used a Sun 711 12-pack (as clustering takes lots and lots of disks). So why wouldn't the 2120 work? Just in case, I installed an old Adaptec 29160, which has drivers for every known operating system on the planet - except HP-UX (of course). Further investigation (and an enlightening conversation with the UNIX MAN BEAST ERNEST) revealed:
• SCSI ID:
  • May not use SCSI ID 2 when a drive is installed in internal bay 2
  • May use SCSI ID 2 for the external port if there is no drive in bay 2
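A quick way to check which devices (and thus which IDs) the host actually sees is ioscan; hardware paths and disk numbers will of course vary per box:

```shell
# Legacy view: list disks with hardware paths (the SCSI target ID is
# embedded in the path, e.g. 0/1/1/0.2.0 is target 2, LUN 0)
ioscan -fnC disk

# Agile view on 11i v3: one line per LUN, with all its lunpaths
ioscan -m lun
```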
I decided to remove all but a single boot drive, so I first had to reduce the logical volume, then the volume group, and ultimately the physical volumes, before pulling the drives.
[/] root@belanna# vgdisplay -v vg00
--- Physical volumes ---
PV Name /dev/disk/disk6_p2
PV Status available
Total PE 4319
Free PE 0
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk4
PV Status available
Total PE 4375
Free PE 4375
Autoswitch On
Proactive Polling On
PV Name /dev/disk/disk5
PV Status available
Total PE 4375
Free PE 4375
Autoswitch On
Proactive Polling On
[/] root@belanna# vgreduce /dev/vg00 /dev/disk/disk5
Volume group "/dev/vg00" has been successfully reduced.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
[/] root@belanna# vgreduce /dev/vg00 /dev/disk/disk4
Volume group "/dev/vg00" has been successfully reduced.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
[/] root@belanna# pvcreate -f /dev/rdisk/disk5
Physical volume "/dev/rdisk/disk5" has been successfully created.
[/] root@belanna# pvcreate -f /dev/rdisk/disk4
Physical volume "/dev/rdisk/disk4" has been successfully created.
[/] root@belanna# rmsf -a /dev/rdisk/disk5
[/] root@belanna# rmsf -a /dev/rdisk/disk4
I then removed a drive and shuffled the boot drive to a different slot:
root@belanna# setboot -p /dev/disk/disk6_p2
Primary boot path set to 0/1/1/0.0.0 (/dev/disk/disk6_p2)
Despite this, I still couldn't get the disk tray to be seen! I first swapped cables, then swapped the HP2120 for a Sun StorEdge S1. When neither of those substitutions worked, I delved back into SCSI, as it had been many, many years since I'd done hands-on hardware. Turns out, if you use LVD drives, the S1 self-terminates. Sure enough, as soon as I pulled the terminator, the EFI picked them right up. (Note to self: the S1 requires a reboot when changing the Base SCSI ID, and an EFI `reset` command will re-read the devices.)
Voilà, super-fast boot drives, and a half terabyte of storage for hosting the HPVMs!
Then I mirrored the boot drives and created a two-disk (distributed) volume group for the VMs to call home. The only thing I couldn't quite figure out was why some commands won't let you run them on the agile-view devices, or which ones require them. That part was a bit of a kluge!
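The boot-mirror commands didn't make it into the transcript; on 11i v3 the sequence is roughly the following (disk7 here is illustrative, not the actual device):

```shell
# Partition the second disk for EFI boot first (idisk on Itanium), then:
pvcreate -f -B /dev/rdisk/disk7_p2      # -B marks the PV as bootable
vgextend /dev/vg00 /dev/disk/disk7_p2
mkboot -e -l /dev/rdisk/disk7           # lay down the EFI boot utilities

# Mirror each logical volume in vg00 onto the new disk:
for lv in /dev/vg00/lvol*; do lvextend -m 1 $lv; done

# Then register the new disk's hardware path as the alternate boot path:
setboot -a <hw_path_of_disk7>
```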
[/] root@belanna# pvcreate /dev/rdisk/disk10 /dev/rdisk/disk11
[/] root@belanna# mkdir /dev/vgHPVM
[/] root@belanna# vgcreate -s 8 -g DISTRIB /dev/vgHPVM /dev/disk/disk10
[/] root@belanna# vgextend -g DISTRIB /dev/vgHPVM /dev/disk/disk11
[/] root@belanna# lvcreate -D y -s g -L 51200 -n lvvmtest /dev/vgHPVM
[/] root@belanna# mkfs /dev/vgHPVM/lvvmtest
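From there the new filesystem just needs a mount point - something along these lines (the /hpvm name is my own choice, not from the transcript):

```shell
mkdir -p /hpvm
mount -F vxfs /dev/vgHPVM/lvvmtest /hpvm

# Make it permanent across reboots:
echo "/dev/vgHPVM/lvvmtest /hpvm vxfs delaylog 0 2" >> /etc/fstab
```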
I did lose an entire bank of memory, so if anyone out there has four 1GB sticks of PC2100-R, let me know. She's currently running at 8GB, down from 12. Although some versions of the rx2600 can take 2GB sticks for a max of 24GB, I'm not sure about this one.
[/iso] root@belanna# hpvmmodify -P vmlinux -a dvd:scsi::file:/iso/SLES-11-SP3-DVD-ia64-GM-DVD1.iso
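With the install media attached, bringing the guest up should just be a matter of (guest name taken from the hpvmmodify above):

```shell
hpvmstart -P vmlinux       # power the guest on
hpvmstatus -P vmlinux      # verify its state
hpvmconsole -P vmlinux     # attach to the virtual console to run the installer
```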
