Monday, January 28, 2013

XenServer storage by hand (works just as well on 6.*)


First there’s a default template in XenServer 5.6 which needs to be removed from the storage:

# xe vbd-list
uuid ( RO)             : f5c9f545-2019-7299-be87-fc7ef00be1e2
          vm-uuid ( RO): e2ad0921-dea8-5a1a-77e8-d3257fdcf48d
    vm-name-label ( RO): XenServer Transfer VM 5.6.0-31124p
         vdi-uuid ( RO): c3a8d327-2036-4ce2-9946-f0522f7572f4
            empty ( RO): false
           device ( RO): 
# xe template-uninstall template-uuid=e2ad0921-dea8-5a1a-77e8-d3257fdcf48d
The following items are about to be destroyed
VM : e2ad0921-dea8-5a1a-77e8-d3257fdcf48d (XenServer Transfer VM 5.6.0-31124p)
VDI: c3a8d327-2036-4ce2-9946-f0522f7572f4 (XenServer Transfer VM system disk) 
Type 'yes' to continue
yes
All objects destroyed
If you really needed that template, it's gone now. I'm not sure what it's actually for. It is installed by default on all new XenServer 5.6 installs, so you should be able to export it from a fresh install and re-import it to get it back, but I haven't tested that and won't offer instructions here.

Next, find the uuid of the Local Storage SR:

# xe sr-list name-label="Local storage"
uuid ( RO)                : dacfea90-263e-0811-ab88-22f01b89b1b4
          name-label ( RW): Local storage
    name-description ( RW): 
                host ( RO): vmhost.example.com
                type ( RO): lvm
        content-type ( RO): user
Then find the PBD that is attached to that:

# xe pbd-list sr-uuid=dacfea90-263e-0811-ab88-22f01b89b1b4
uuid ( RO)                  : daabdf71-641c-900b-3451-bd5c70675fab
             host-uuid ( RO): 23d8a9a0-a317-47a5-a1e6-858ab120b57b
               sr-uuid ( RO): dacfea90-263e-0811-ab88-22f01b89b1b4
         device-config (MRO): device: /dev/disk/by-id/scsi-36001c230bd1017000e4f2ee6554b21c8-part3
    currently-attached ( RO): true
Then unplug the PBD:

# xe pbd-unplug uuid=daabdf71-641c-900b-3451-bd5c70675fab
Now destroy the SR:

# xe sr-destroy uuid=dacfea90-263e-0811-ab88-22f01b89b1b4
Now you can create the SR. I’ve been using servers that have /dev/sda, so the storage partition is /dev/sda3. If you’re doing this on an old IDE system (ick) you might have to use /dev/hda3 here (SATA disks show up as /dev/sdX, so /dev/sda3 usually still applies), or on an HP Smart Array controller probably /dev/cciss/c0d0p3. If you have Fibre Channel or iSCSI-attached disk on a SAN you’re on your own to figure out what your block device is.

# xe sr-create content-type=user type=ext device-config:device=/dev/sda3 shared=false name-label="Local storage"
76ec3072-ae85-cd38-e363-34cf6b63d520
This command will take some time to return as it creates the SR.
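The manual steps above can be rolled into one sketch. This is untested against a live pool; it assumes a single SR named "Local storage" and storage on /dev/sda3, and the function name recreate_local_sr is mine, not anything XenServer provides. The xe `--minimal` flag makes the CLI print just the uuid, which saves the copy-and-paste:

```shell
#!/bin/sh
# Hedged sketch of the teardown-and-recreate sequence above.
# Assumes exactly one SR named "Local storage"; adjust the device for your hardware.
recreate_local_sr() {
    dev="${1:-/dev/sda3}"
    sr=$(xe sr-list name-label="Local storage" --minimal)   # SR uuid only
    pbd=$(xe pbd-list sr-uuid="$sr" --minimal)              # the PBD attached to it
    xe pbd-unplug uuid="$pbd"
    xe sr-destroy uuid="$sr"
    xe sr-create content-type=user type=ext \
        device-config:device="$dev" shared=false name-label="Local storage"
}
```

Call it as `recreate_local_sr /dev/sda3` once you are sure nothing you care about lives on that SR; sr-destroy is not reversible.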

You now probably want to tune down the reserved space on the ext3 partition to make more of it available. By default the filesystem reserves 5% of its blocks for root, mainly to keep block allocation efficient and avoid fragmentation, but on a dedicated VM store you probably want to manage that headroom yourself (set monitoring alarms at 95% and migrate VMs off if storage usage climbs above that).
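For a sense of scale: on the ~280 GB filesystem shown in the df -k output below, that 5% reserve works out to roughly 13 GiB of reclaimable space:

```shell
# 279556112 is the 1K-block count df -k reports for the SR filesystem below.
kb=279556112
echo "$(( kb * 5 / 100 / 1024 )) MiB held in reserve"   # → 13650 MiB held in reserve
```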

The device to tune is not /dev/sda3 itself but the device-mapper node the SR is mounted from, which you can find with df -k:

# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              4128448   3214896    703840  83% /
none                    384512         0    384512   0% /dev/shm
/opt/xensource/packages/iso/XenCenter.iso
                         44410     44410         0 100% /var/xen/xc-install
/dev/mapper/XSLocalEXT--76ec3072--ae85--cd38--e363--34cf6b63d520-76ec3072--ae85--cd38--e363--34cf6b63d520
                     279556112    191652 265163836   1% /var/run/sr-mount/76ec3072-ae85-cd38-e363-34cf6b63d520
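That long name is just LVM's device-mapper encoding: for an ext SR the volume group is named XSLocalEXT-&lt;SR uuid&gt; and the logical volume is the SR uuid, and device-mapper doubles every hyphen when it joins the two with a single hyphen. If I have that right, you can rebuild the path from the uuid that sr-create printed, without reading it out of df:

```shell
# Rebuild the /dev/mapper path from the SR uuid returned by sr-create.
# device-mapper escapes '-' as '--' inside VG and LV names.
sr_uuid="76ec3072-ae85-cd38-e363-34cf6b63d520"
esc=$(printf '%s' "$sr_uuid" | sed 's/-/--/g')
echo "/dev/mapper/XSLocalEXT--${esc}-${esc}"
```

For this uuid that prints exactly the ugly name in the df output above.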
Use tune2fs against that really ugly block device name to set the reserve to 0%:

# tune2fs -m 0 /dev/mapper/XSLocalEXT--76ec3072--ae85--cd38--e363--34cf6b63d520-76ec3072--ae85--cd38--e363--34cf6b63d520
tune2fs 1.39 (29-May-2006)
Setting reserved blocks percentage to 0% (0 blocks)
You should now be able to see the new “Local storage” device in XenCenter and can set it as the default storage location for new VMs. You will also see VHDs associated with your VMs showing up in the /var/run/sr-mount/[...etc...] directory.