Example: checking filesystems for a virtual machine

There are two kinds of virtual machine image formats that we deal with in FermiCloud, namely "raw" and "qcow2".
The vast majority are qcow2; a few are still raw. It is not uncommon for both kinds of VMs to come up
with some kind of filesystem errors, especially after a cold restart.

If you are checking an image in place, it will be at /var/lib/one/datastores/<nnn>/<vmid>/disk.0.
You can tell which kind it is by running the "file" command on it.
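
For example (the exact wording of the output depends on the version of file installed):

file ./disk.0
(a qcow2 image is reported as something like "QEMU QCOW Image", while a raw disk image
typically shows up as a boot sector / partition table or simply "data")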

If the VM is still running, stop it first:

virsh destroy one-xxxxx

Fixing a raw image

kpartx -a ./disk.0
(this will make all the partitions of disk.0 available as /dev/mapper/loop<n>p<m>,
where <n> is the loopback device number (usually 0) and <m> is the partition number (1-3))

fsck -f /dev/mapper/loop0p1
fsck -f /dev/mapper/loop0p2
fsck -f /dev/mapper/loop0p3
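
When the checks are done, the partition mappings can usually be removed again before restarting the VM:

kpartx -d ./disk.0
(the reverse of kpartx -a; this removes the /dev/mapper/loop<n>p<m> entries)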

Then go to the OpenNebula head node. The VM should show in the "poff" or "unkn" state
and can be started up again with onevm resume <vmid>.
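
To check the current state from the head node, the standard OpenNebula CLI can be used:

onevm list
(onevm show <vmid> gives more detail on a single VM)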

Fixing a qcow2 image

virt-rescue ./disk.0

This boots a small rescue appliance (a qemu virtual machine) with access to the image, from which you can mount the partitions and/or fsck them.
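
For example, the guest disk normally appears as /dev/sda inside the rescue shell, so a typical check would be
(the partition name here is an assumption; list the partitions first as described next):

fsck -f /dev/sda1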

virt-list-partitions ./disk.0 can show how many partitions there are.
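
On newer libguestfs installations the same information is also available from virt-filesystems:

virt-filesystems -a ./disk.0 --partitions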

guestmount -a ./disk.0 -m /dev/sda1 /mnt/tmp

will mount the first partition of the image on the host, at the /mnt/tmp mount point.

(and fusermount -u /mnt/tmp will unmount it again).

To bring a static (libvirt-managed) VM back up after repair:

virsh create fcdft0x2.xml
(run from the /etc/libvirt/qemu directory)

ST note: this should be linked from the FermiGrid side because it deals with static VMs.
The same can be done with a raw image on FermiCloud; just substitute the image file name for the device file name below.
For qcow2 images, see the instructions about qcow2 images.