KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). How do I install KVM under CentOS or Red Hat Enterprise Linux version 5.5?
KVM has been included in the mainline Linux kernel since version 2.6.20. RHEL 5.5 (and the upcoming RHEL 6) supports KVM out of the box, and KVM has also been ported to FreeBSD as a loadable kernel module. However, this tutorial was tested only on CentOS and RHEL 5.5, running on a 64-bit Intel Xeon CPU (with Intel VT) with a 64-bit kernel and SELinux in enforcing mode.
Xen allows several guest operating systems to execute on the same computer hardware, and it is also included with RHEL 5.5. So why use KVM over Xen? KVM is part of the official Linux kernel and is fully supported by both Novell and Red Hat. Xen boots from GRUB and loads a modified host operating system, such as RHEL, into dom0 (the host domain). KVM has no concept of dom0 and domU; it uses the /dev/kvm interface to set up guest operating systems and provides the required drivers. See the official wiki for more information.
You must install the following packages:
- kmod-kvm : kvm kernel module(s)
- kvm : Kernel-based Virtual Machine
- kvm-qemu-img : Qemu disk image utility
- kvm-tools : KVM debugging and diagnostics tools
- python-virtinst : Python modules and utilities for installing virtual machines
- virt-manager : Virtual Machine Manager (GUI app, to install and configure VMs)
- virt-viewer : Virtual Machine Viewer (another lightweight app to view the VM console and/or install VMs)
- bridge-utils : Utilities for configuring the Linux Ethernet bridge (this is recommended for KVM networking)
KVM Package Group
RHEL comes with a KVM package group which provides full virtualization support with KVM. You can list all packages in the group as follows:
# yum groupinfo KVM
Loaded plugins: rhnplugin, security
Setting up Group Process
Group: KVM
 Description: Virtualization Support with KVM
 Mandatory Packages:
   celt051
   etherboot-zroms
   etherboot-zroms-kvm
   kmod-kvm
   kvm
   kvm-qemu-img
   qcairo
   qffmpeg-libs
   qpixman
   qspice-libs
 Default Packages:
   Virtualization-en-US
   libvirt
   virt-manager
   virt-viewer
 Optional Packages:
   celt051-devel
   etherboot-pxes
   etherboot-roms
   etherboot-roms-kvm
   gpxe-roms-qemu
   iasl
   kvm-tools
   libcmpiutil
   libvirt-cim
   qcairo-devel
   qffmpeg-devel
   qpixman-devel
   qspice
   qspice-libs-devel
A Note About libvirt
libvirt is an open source API and management tool for managing platform virtualization. It is used to manage Linux KVM and Xen virtual machines through graphical interfaces such as Virtual Machine Manager and higher level tools such as oVirt. See the official website for more information.
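Once the libvirtd daemon is running, you can manage guests from the command line with virsh, which ships with the libvirt package. A few commonly used commands are sketched below (the guest name "vm1" is an example, not part of this setup):

```shell
# Start the libvirt daemon and enable it at boot
/etc/init.d/libvirtd start
chkconfig libvirtd on

# Show host CPU / memory details as seen by libvirt
virsh nodeinfo

# List all guests, both running and shut off
virsh list --all

# Start, gracefully shut down, or forcibly stop a guest named "vm1"
virsh start vm1
virsh shutdown vm1
virsh destroy vm1
```

virsh talks to libvirtd over a local socket by default, so these commands work the same whether the guest was created with virt-manager or virt-install.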
A Note About QEMU
QEMU is a processor emulator that relies on dynamic binary translation to achieve reasonable speed while being easy to port to new host CPU architectures. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests. See the official website for more information.
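The kvm-qemu-img package listed above provides the qemu-img utility for creating and inspecting guest disk images. A minimal sketch (the paths and sizes are illustrative):

```shell
# Create a 10GB raw disk image for a new guest
qemu-img create -f raw /var/lib/libvirt/images/vm1.img 10G

# Or create a qcow2 image, which grows on demand and supports snapshots
qemu-img create -f qcow2 /var/lib/libvirt/images/vm1.qcow2 10G

# Inspect an existing image: format, virtual size, and actual disk usage
qemu-img info /var/lib/libvirt/images/vm1.img
```

Raw images give slightly better I/O performance; qcow2 trades some speed for thin provisioning and snapshot support.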
A Note About Virtio Drivers
Virtio provides paravirtualized drivers for KVM/Linux. With them you can run multiple virtual machines running unmodified Linux or Windows. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. According to Red Hat:
Para-virtualized drivers enhance the performance of fully virtualized guests. With the para-virtualized drivers guest I/O latency decreases and throughput increases to near bare-metal levels. It is recommended to use the para-virtualized drivers for fully virtualized guests running I/O heavy tasks and applications.
Host Operating System
Your main operating system, such as CentOS or RHEL, is known as the host operating system. KVM is a Linux kernel module that enables a modified QEMU program to use hardware virtualization. You only need to install KVM on the host operating system.
Guest Operating Systems
A guest operating system is nothing but an operating system running under the host operating system. Each KVM domain (guest) must have a unique name and ID (assigned by the system). KVM supports various guest operating systems, such as:
- MS-Windows 2008 / 2000 / 2003 Server
- MS-Windows 7 / Vista / XP
- Sun Solaris
- Various Linux distributions.
- MS DOS
- Amiga Research OS
Type the following command to install KVM under RHEL or CentOS:
# yum install kvm virt-viewer virt-manager libvirt libvirt-python python-virtinst
Alternatively, install the entire KVM package group:
# yum groupinstall KVM
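Before going further, verify that the CPU supports hardware virtualization and that the KVM kernel modules are loaded:

```shell
# Look for vmx (Intel VT) or svm (AMD-V) in the CPU flags;
# no output means hardware virtualization is missing or disabled in the BIOS
egrep --color 'vmx|svm' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded
lsmod | grep kvm

# If they are missing, load them manually
# (use kvm-intel on Intel VT hardware, kvm-amd on AMD-V hardware)
modprobe kvm
modprobe kvm-intel
```

On the Intel Xeon host used in this tutorial you should see the vmx flag and both the kvm and kvm_intel modules loaded.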
Important Configuration And Log Files (Directories) Location
The following files are required to manage and debug KVM problems:
- /etc/libvirt/ – Main configuration directory.
- /etc/libvirt/qemu/ – Virtual machine configuration directory. All xml files regarding VMs are stored here. You can edit them manually or via virt-manager.
- /etc/libvirt/qemu/networks/ – Networking for your KVM host, including the default NAT network. NAT is only recommended for small setups or desktops. I strongly suggest you use bridge-based networking for performance.
- /etc/libvirt/qemu/networks/default.xml – The default NAT configuration used by NAT device virbr0.
- /var/log/libvirt/ – The default log file directory. All VM-specific log files are stored here.
- /etc/libvirt/libvirtd.conf – Master libvirtd configuration file.
- /etc/libvirt/qemu.conf – Master configuration file for the QEMU driver.
By default, libvirt does not open any TCP or UDP ports. However, you can change this by editing the /etc/libvirt/libvirtd.conf file. Also, VNC is configured to listen on 127.0.0.1 by default. To make it listen on all public interfaces, edit the /etc/libvirt/qemu.conf file.
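For example, to make guest VNC consoles reachable from other hosts, you might set the following in /etc/libvirt/qemu.conf and then restart libvirtd (exposing VNC beyond the local host is a security risk, so firewall it accordingly):

```shell
# /etc/libvirt/qemu.conf - make VNC listen on all interfaces instead of 127.0.0.1
vnc_listen = "0.0.0.0"
```

Remote libvirt access (listen_tcp and friends in /etc/libvirt/libvirtd.conf) is configured along the same lines, but is best left disabled unless you need it.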
Our Sample Setup
                                                       +-------------> vm#1 ( 10.10.21.71, CentOS MySQL Server )
                                                       |
                                                       +-------------> vm#2 ( 10.10.21.72, FreeBSD 7 Web Server )
 LAN --> Switch --> eth0 ---+                          |
         10.10.21.70        |     +----------------+   +-------------> vm#3 ( 10.10.21.73, OpenBSD 4.x Firewall )
                            +-br0>|  RHEL Server   |---+
                            |     |      KVM       |   +-------------> vm#4 ( 10.10.21.74, Solaris 10 Testing Server )
                            +-br1>+----------------+   |
 WAN --> ISP Router --> eth1                           +-------------> vm#5 ( 10.10.21.75, Windows Server Testing Server )
         (public IP)                                   |
                                                       +-------------> vm#6 ( 10.10.21.76, RHEL Mail Server )
(Fig.01: Our sample server setup)
- OS – RHEL / CentOS v5.5 is our host operating system.
- Host has two interfaces: eth0 and eth1
- LAN – eth0 with a private IP.
- Internet – eth1 with public IPv4/IPv6 addresses.
- Disk – 4 × 73GB 15k SAS disks in hardware RAID 10. All VMs are stored on the same server (later I will cover SAN/NFS/NAS configuration with live migration).
- RAM – 16GB ECC
- CPU – Two dual-core Intel Xeon L5320 CPUs @ 1.86GHz with VT enabled in the BIOS.
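The bridged setup above requires bridge definitions under /etc/sysconfig/network-scripts/ on the host. A minimal sketch for br0/eth0 follows (the IP matches the diagram; the netmask is an assumption, so adjust both for your LAN, and mirror the same pattern for br1/eth1):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 - enslave eth0 to the bridge
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0 - the bridge carries the host IP
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.21.70
NETMASK=255.255.255.0
```

After creating both files, run `service network restart` and verify the bridge with `brctl show` (from the bridge-utils package listed earlier).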
Virtual Machine Configuration
- Bridged mode networking (eth0 == br0 and eth1 == br1) with full access to both LAN and Internet.
- Accelerated virtio drivers used for networking (model=virtio)
- Accelerated virtio drivers used for disk (if=virtio); disks will show up as /dev/vd[a-z][1-9] inside the VM.
- Various virtual machines running different guest operating systems as per requirements.
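As an example, a guest matching the configuration above can be created non-interactively with virt-install (from the python-virtinst package). The name, RAM, disk size, and ISO path below are illustrative, not part of the sample setup:

```shell
# Create a CentOS guest bridged onto br0, using virtio for disk and network
virt-install \
  --name vm1 \
  --ram 1024 \
  --vcpus=1 \
  --disk path=/var/lib/libvirt/images/vm1.img,size=10,bus=virtio \
  --network bridge=br0,model=virtio \
  --os-variant rhel5 \
  --cdrom /var/lib/libvirt/images/centos-5.5.iso \
  --vnc
```

virt-install registers the guest with libvirt, so after installation it can be managed with virsh or virt-manager like any other domain.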