Virtual Machines at the Far Cluster » History » Version 13
Virtual Machines at the Far Cluster¶
1. Install required packages.
2. Configure the network.
3. Create a libvirt domain.
Base System Setup (Setup libvirt)¶
The base setup for the virtual machines includes kvm, libvirt, virt-manager, and bridged network interfaces.
Install qemu-kvm, libvirt, libvirt-devel, virt-manager, and bridge-utils:
# yum install qemu-kvm libvirt libvirt-devel virt-manager bridge-utils
Identify the VLANs for the networks you wish to connect to; they must be statically trunked to the eth1 interface. In our case we want VLAN 719 (188.8.131.52/24) and VLAN 2015 (192.168.139.0/23).
Create a bridge network interface for each network: we created br719 for VLAN 719 and br2015 for VLAN 2015. Each bridge interface must be attached to an interface on the appropriate VLAN. Create a network script file for each bridge interface.
/etc/sysconfig/network-scripts/ifcfg-vmbridge1

DEVICE="br719"
TYPE="Bridge"
BOOTPROTO=none
DELAY=0
Create the VLAN network interfaces by creating a network script file for each VLAN interface. The native interface eth1 does not change.
/etc/sysconfig/network-scripts/ifcfg-eth1.719

VLAN=yes
DEVICE=eth1.719
BOOTPROTO=none
TYPE=Ethernet
BRIDGE=br719
Bring up every required bridge and VLAN interface. The bridge must be brought up first.
# ifup br719
# ifup eth1.719
libvirt is designed to work on a cluster of hosts. We try to follow that practice when setting up this single host to ease the expansion of the system.
|domain|virtual machine definition|
|volume|virtual machine hard disk|
|pool|storage location for volumes|
|define|makes libvirt aware of a domain, volume, or pool|
|start|start a domain or pool|
|autostart|start a domain or pool when libvirt is started|
Red Hat places the qemu-kvm binary in a non-standard location, /usr/libexec/qemu-kvm. Add this directory to the path.
# export PATH="$PATH:/usr/libexec"
Kernel virtualization needs to be enabled by loading the appropriate kernel module.
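As a sketch, the module to load depends on the host CPU vendor: kvm_intel on Intel hardware, kvm_amd on AMD (this requires root and a CPU with virtualization extensions enabled in the BIOS):

```shell
# Load the KVM module for the host CPU
modprobe kvm_intel    # or: modprobe kvm_amd on AMD hardware

# Confirm the module is loaded
lsmod | grep kvm
```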
Start the libvirt daemon.
# service libvirtd start
All of libvirt can be configured through XML and the virsh command. There is a plethora of documentation at the libvirt website. UUIDs are required for much of libvirt; the uuidgen command can generate them.
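For example, a UUID can be generated and wrapped in a <uuid> element for pasting into an XML definition (the element name is standard libvirt domain XML):

```shell
# Generate a random UUID for use in a libvirt XML definition
uuid=$(uuidgen)
echo "<uuid>${uuid}</uuid>"
```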
The only required libvirt setup is the domains. Storage pools, volumes, and networks can be configured to make domain creation easier and more scalable.
Storage is handled in pools and volumes. Pools are locations that hold multiple volumes (e.g. /libvirt/images). Volumes are the virtual disk images (e.g. example.qcow2). Pools and volumes are not required but allow the creation of new virtual machines and disks to be integrated with libvirt.
We defined the pool nova-pool at /nova_services/nova-pool, described by the XML file nova-pool.xml:
<pool type="dir">
  <name>nova-pool</name>
  <target>
    <path>/nova_services/nova-pool</path>
    <permissions>
      <mode>0750</mode>
      <user>107</user>
      <group>107</group>
    </permissions>
  </target>
</pool>
Be sure to pay attention to the permissions. The mode allows user read, write, and execute with group read and execute. The user and group are the uid and gid of the qemu user. The permissions must be set correctly to be able to create virtual machines with the virt-install tool.
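As a sketch (assuming uid/gid 107 is the qemu user, as in the XML above), the permissions can also be applied and verified by hand, though virsh pool-build will create the directory with the configured permissions for you:

```shell
# Apply and verify the permissions from nova-pool.xml by hand
# (uid/gid 107 is the qemu user on this host; adjust if yours differs)
mkdir -p /nova_services/nova-pool
chown 107:107 /nova_services/nova-pool
chmod 0750 /nova_services/nova-pool

# Verify: should print "750 107 107"
stat -c '%a %u %g' /nova_services/nova-pool
```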
The new pool must be defined, built, started, and autostarted. Defining will configure libvirt with the pool. Building will create the pool, in our case a directory. All of the commands except define use the pool name that is defined in the nova-pool.xml file.
# virsh pool-define nova-pool.xml
# virsh pool-build nova-pool
# virsh pool-start nova-pool
# virsh pool-autostart nova-pool
The pools can be listed with:
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
nova-pool            active     yes
Check out http://libvirt.org/formatstorage.html for more information on pool configuration xml files.
libvirt can be configured to manage virtual machine networks through xml similar to domains, pools and volumes. We do not use this because our version of libvirt does not support bridge interfaces in this configuration. We must define our network in each domain. See http://libvirt.org/formatnetwork.html for more information on network configuration xml files.
Virtual machines are referred to as domains by the libvirt software.
Volumes (virtual disks)¶
libvirt refers to virtual disks as volumes. They are stored in pools.
Create a volume from an xml file.
<volume>
  <name>novaqa-far-app-01.qcow2</name>
  <capacity unit="G">10</capacity>
  <target>
    <path>/nova_services/nova-pool/novaqa-far-app-01.qcow2</path>
    <format type='qcow2'/>
  </target>
</volume>
# virsh vol-create nova-pool novaqa-far-app-01.xml
List existing volumes:
# virsh vol-list --pool nova-pool
Name                     Path
-----------------------------------------
novaqa-far-app-01.qcow2  /nova_services/nova-pool/novaqa-far-app-01.qcow2
novaqa-far-db-01.qcow2   /nova_services/nova-pool/novaqa-far-db-01.qcow2
novaqa-far-db-02.qcow2   /nova_services/nova-pool/novaqa-far-db-02.qcow2
See http://libvirt.org/formatstorage.html#StorageVol for more information on volume xml files.
Domains can be created using previously defined volumes and networks. Our version of libvirt does not support defining networks with bridge interfaces but we still use defined volumes.
The xml file used to create novaqa-far-app-01 is below.
<domain type="kvm">
  <name>novaqa-far-app-01</name>
  <title>novaqa-far-app-01</title>
  <description>Virtual Machine for the novaqa glassfish application server at the far detector</description>
  <os>
    <type arch="x86_64">hvm</type>
    <boot dev="hd" />
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <vcpu>2</vcpu>
  <memory>4194304</memory>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type="file">
      <source file="/nova_services/nova-pool/novaqa-far-app-01.qcow2" />
      <target dev="hda" />
      <driver name="qemu" type="qcow2" />
    </disk>
    <interface type="bridge">
      <source bridge="br719" />
      <model type="virtio" />
      <mac address="78:17:bf:a7:26:c9"/>
    </interface>
    <interface type="bridge">
      <source bridge="br2015" />
      <model type="virtio" />
      <mac address="78:17:bf:a7:26:c8"/>
    </interface>
  </devices>
</domain>
See http://libvirt.org/formatdomain.html for an in depth description of this file.
The key parts to change are the name, title, and description. Change the devices -> disk -> source file for your disk image. Modify the devices -> interface -> source bridge to match your bridge network interface.
New Virtual Machine Creation¶
The virt-manager package is used to ease the creation of virtual machines. Use the virt-install command to create the virtual machine and launch a VNC display for it.
Example virt-install command:
virt-install --connect=qemu:///system --name=novaqa-far-app-01 --ram=4096 --arch=x86_64 --vcpus=2 --cpu host \
  --description "Novaqa far application server." --cdrom=/tmp/ubuntu.iso --os-type=linux --boot hd \
  --disk pool=temp-pool,size=10,format=qcow2 --bridge=br719,model=virtio,mac=78:17:bf:a7:26:c8 \
  --bridge=br2015,model=virtio,mac=78:17:bf:a7:27:c8 --graphics vnc,password=LetMeIn,listen=0.0.0.0
Some key things to notice about this command: --ram is in megabytes and --disk size is in gigabytes. The --cdrom option is used to boot the virtual machine on the first boot only; on later boots the cdrom device will exist but no image will be mounted. The --os-type option can be set to linux or windows, which sets some default features such as acpi and apic for the virtual machine; it can be left out. The network interfaces are declared with the --bridge options, where model=virtio selects the driver. Virtio allows Linux guests to perform better; it may work with Windows, but if not, remove the model option or find a more appropriate model. The final --graphics vnc,password=LetMeIn,listen=0.0.0.0 creates a VNC display on the first open port starting at :5900. With this method the VNC server is started every time the virtual machine runs; removing it after the install is explained later.
To find the VNC port the machine is displayed on, run the following command with your virtual machine name:
# virsh vncdisplay novaqa-far-app-01
:1
This shows that the novaqa-far-app-01 domain is on port :5901. Use any VNC client to connect to the host machine's IP address on port 5901 to install the virtual machine guest.
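The display-to-port mapping is simply 5900 plus the display number, which can be computed in the shell:

```shell
# Convert a virsh vncdisplay value such as ":1" into a TCP port number
display=":1"                  # e.g. the output of: virsh vncdisplay novaqa-far-app-01
port=$((5900 + ${display#:}))  # strip the leading colon, add the VNC base port
echo "$port"                  # prints 5901
```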
To remove the VNC display the definition of the domain must be changed with the virsh edit command, which opens the definition in vi. Remove the <graphics /> line and save the definition. The domain must be restarted for this to take effect.
# virsh edit novaqa-far-app-01

Remove the graphics line and save:

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='LetMeIn'/>
If the domain is running it must be restarted.
# virsh shutdown novaqa-far-app-01
# virsh start novaqa-far-app-01
If the domain does not shut down with the shutdown command then it is not respecting acpi. A shutdown can be forced with destroy, which pulls the virtual power cord.
# virsh destroy novaqa-far-app-01
The virtual machine domain should now be completely set up. It can be accessed by any method installed within the guest, or the VNC display can be left turned on.
There are a couple of things that may cause confusion.
If the machine does not respond to the virsh shutdown command, it is most likely missing the acpi virtual machine feature. This can be checked with the virsh dumpxml command.
# virsh dumpxml novaqa-far-app-01
<domain type='kvm' id='12'>
  ...
  <features>
    <acpi/>
    <apic/>
  </features>
  ...
</domain>
If this is not displayed, the acpi feature can be added with the virsh edit command, which opens the domain XML in vi. Saved changes take effect after the virtual machine is restarted.
Virtual Machine Crashes¶
The virtual machine domain can be set to restart after a crash. If virt-install was used to create the guest this will be the default. The virsh dumpxml command can be used to check this.
# virsh dumpxml novaqa-far-app-01
<domain type='kvm' id='12'>
  ...
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  ...
</domain>
This can be modified with the virsh edit command. The domain must be restarted for the changes to take effect.