Finally user-friendly virtualization for Linux

The upcoming 2.6.20 Linux kernel brings a nice virtualization framework for all the virtualization fans out there. It's called KVM, short for Kernel-based Virtual Machine. Not only is it user-friendly, it is also fast and stable, even though it's not yet officially released. This article tries to explain how it all works, in theory and practice, together with some simple benchmarks.

A little bit of theory

There are several approaches to virtualization today. One of them is so-called paravirtualization, where the guest OS must be slightly modified in order to run virtualized. The other method is called "full virtualization", where the guest OS can run as it is, unmodified. It has been said that full virtualization trades performance for compatibility, because it's harder to achieve good performance without the guest OS assisting in the process of virtualization. On the other hand, recent processor developments tend to narrow that gap. The latest processors from both Intel (VT) and AMD (AMD-V) have hardware support for virtualization, which tends to make paravirtualization unnecessary. This is exactly what KVM is all about: by adding virtualization capabilities to a standard Linux kernel, we can enjoy all the fine-tuning work that has gone (and is going) into the kernel, and bring those benefits into a virtualized environment.

Under KVM's model, every virtual machine is a regular Linux process scheduled by the standard Linux scheduler. A normal Linux process has two modes of execution: kernel and user. KVM adds a third mode: guest mode (which has its own kernel and user modes).


KVM consists of two components:

  • a device driver for managing the virtualization hardware; this driver exposes its capabilities via a character device /dev/kvm
  • a user-space component for emulating PC hardware; this is a lightly modified QEMU process

QEMU is a well-known processor emulator written by French computer wizard Fabrice Bellard.
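With the kernel side built as modules (named kvm.ko plus kvm-intel.ko or kvm-amd.ko in the KVM patches), a quick shell check confirms the device driver is loaded. This is a sketch; the exact module names depend on how your kernel was built:

```shell
# Load the core KVM module plus the vendor-specific part:
#   modprobe kvm && modprobe kvm-intel   # Intel VT
#   modprobe kvm && modprobe kvm-amd     # AMD-V
# Then verify the character device the driver exposes:
if [ -c /dev/kvm ]; then
    echo "KVM device present"
else
    echo "no /dev/kvm - load the kvm modules first"
fi
```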

KVM in practice: Windows XP as a guest OS

Although KVM is still in development, I decided to play a little bit with it. I used the 2.6.20-rc2 kernel, together with the already available Debian packages: kvm and qemu. So, after I had recompiled the kernel and installed the packages, everything was ready. I suppose that this, together with the fact that no proprietary software or binary kernel modules are needed, explains why I call it user-friendly.

But there's more; see how easy it is to install a (proprietary!) guest OS:

qemu-img create -f qcow hda.img 6G
kvm -no-acpi -m 256 -cdrom winxpsp2.iso -hda hda.img -boot d

The first step was to create a virtual disk drive (a simple file on the host OS). I chose QEMU's copy-on-write format, which means the file will grow as needed, not taking up much disk space unless it's really needed (up to a maximum of 6GB). With the virtual drive ready, the installation could begin. Actually, I did one more (non-mandatory) step before those two: I dumped the installation CD to disk so that the installation goes faster, because a hard disk seeks much faster than a CD/DVD drive (a simple cp /dev/cdrom image.iso should do the job). I used the -no-acpi switch because ACPI support in QEMU is still experimental and there were some problems with the Windows installer.

And that's it; I suppose it doesn't get simpler than that. In a short time I had Windows installed and running. Why Windows, some of you may ask? Well, I couldn't find a reason to have another Linux virtualized under this one at the moment. Also, I always wanted a handy virtualized Windows environment for experimental purposes. Not dual-booting, which is a PITA for everyday use, but something that could be easily started from time to time. For example, to see how this web page looks in IE7, and stuff like that...

Some benchmarking

OK, so with Windows XP installed in no time, I had plenty of time left to do some simple benchmarks. Nothing comprehensive or scientific, I remind you, just a few quick tests to give you a rough idea of how KVM works in practice. I've also included a few other interesting cases, because it was easy to do. Once I had Windows installed, it could also be run under unmodified QEMU, no problem. And with a little additional effort I compiled kqemu, the QEMU accelerator module written by QEMU's original author, unfortunately a closed-source product. Finally, I settled on just two applications, PCMark2002 and Super PI (ver 1.1e), for the sole reason that I had numbers from them from the time when I had Windows XP installed natively (but that was a few months ago, and I have since deleted it). Before I forget, the tests were run on an Intel E6600 processor.

[img_assist|nid=793|title=|desc=|link=none|align=none|width=260|height=250] [img_assist|nid=794|title=|desc=|link=none|align=none|width=260|height=250]

I think it's pretty obvious how much improvement both kqemu and KVM bring over the QEMU emulator alone. It also seems that kqemu still holds a slight lead over KVM, but I'm sure that KVM's performance will only improve over time; it's really young compared to all the other virtualization products.


Running Super PI is another story: KVM is the fastest one here, running at 84% of native speed, which is a great result. Stock QEMU is so slow at this task that the graph above is hard to decipher, so I'll list all the results here (time to generate the first million digits of pi, less is better): QEMU: 492.5 sec, kqemu: 28.5 sec, KVM: 25.5 sec, native: 21.5 sec.


While still in the early stages of development, KVM shows real potential. It's fun to work with, and I suppose we'll hear more and more good news about it in the following months. Once this technology gets incorporated into the mainstream Linux distributions (in the not so distant future), virtualization will become a real commodity, and not only for data centers and server consolidation purposes, but also on Linux desktops everywhere, mostly thanks to the really great work of the QEMU & KVM developers. But you can start playing with it today...

KVM: Kernel-based Virtual Machine for Linux
QEMU: open source processor emulator
QEMU Accelerator Module
KVM: the original announcement on the linux-kernel list

kvm_whitepaper.pdf (118.89 KB)
xp_under_qemu.png (114.08 KB)


Since you're obviously running a system with Vanderpool/Pacifica capabilities, I'd have loved it if you had installed Xen and compared performance of Windows under Xen with KVM.

Oh, and can we have the specifications of the hardware on which you performed the test?

> Since you're obviously running a system with Vanderpool/Pacifica capabilities, I'd have loved it if you had installed Xen and compared performance of Windows under Xen with KVM.

Yes, that's something I would like to do, too. But from what I hear, Xen setup can be complicated, so I started with KVM to be on the safe side. :) I suppose I'll write another review if and when I manage to run Windows under Xen.

> Oh, and can we have the specifications of the hardware on which you performed the test?

Sure. CPU is Intel Core 2 Duo model E6600 (2.4GHz/4MB), there's 1GB of DDR400 RAM, the virtual disk is on the software mirror (MD RAID1 device) that runs on two Seagate ST3120026A drives (120GB). Finally, all that equipment is plugged into a very interesting ASRock 775Dual-VSTA motherboard, described in detail here.

Well, I encourage you to try out Xen as soon as possible, and to post a performance comparison. Debian already has Xen packages, as well as Debian-specific tools and graphical Xen management tools.

Fedora is much better, and its graphical management tool is the best you can get.

Quoted from the link you provided:

What is the install location? This is the path to a Fedora Core 6 installation tree in the format used by anaconda. NFS, FTP, and HTTP locations are all supported. Examples include:
Installation must be a network type. It is not possible to install from a local disk or CDROM. It is possible, however, to set up an installation tree on the host OS and then export it as an NFS share.

This is not what I would call easy to use; most people use an ISO to install a VM ...
At the moment Xen is not very user-friendly, and the virt tools are very basic. That's certainly why it is not included, or planned to be included, in the Linux kernel at the moment.

"This is not what I can call easy to use, most people use ISO to install VM ..."

Where are your stats for that? You can very well use ISOs to install a VM; it's just that doing so from within the graphical console is currently not supported. Do you have anything better than Fedora's virt-manager on Xen for any distribution? If not, my earlier assertion is perfectly correct.

I'm trying to share the virtual disks assigned between two guests.

Does anybody know how to share disks between guests?

Any docs or steps?



I found some posts about things like this, but they may be out of date, so I'm posting again.
I'm planning to use virtualization on my server (I have an Intel VT-capable CPU), and I would like to get as much 3D acceleration out of it as I can.
If anyone has tried to do the same thing, please help me out with your experiences. I have read so many articles about virtualization software; my first try would be KVM,
but there are so many of them...

Thank you,

I installed Xen on FC6. Actually, it is not that difficult. If any one of you wants to do the job, I can provide some info on it!

I'm trying to install a Windows OS under paravirtualized Fedora Core 6 with Xen, but I have a problem:
How can I install a Windows guest OS from DVD?

Enabling Xen in Debian is damn easy and straightforward.

You only need to run:
apt-get install xen-linux-system-2.6.18-3-xen-686
and reboot.

The Xen entry is already added to GRUB and selected by default, so your system will boot under the Xen hypervisor.

Then it's really easy to create new images using the "xen-tools" package.

Plus, you can overclock that CPU up to 3.8 GHz with no problem; the E6600 architecture allows it. For example, I have an E6400 overclocked to 3.0 GHz with the stock cooler.

I suppose I could, although I stopped overclocking after the era of the Intel Celeron 366 (which ran completely stable when overclocked to 550).

I don't know why, besides the fact that processors have become so much more powerful in the meantime. :)

Actually, I had two of the above-mentioned CPUs on yet another interesting and world-famous board: the Abit BP6. Uncertified, but very cheap SMP for those days. Does anybody here remember that board? ;)

Yes... I ran a dual Celeron 366@550 on a BP6 for quite a few years, and only last year retired this board from my Linux server in favor of a P4 2.4 GHz based Intel board. A lot of fun, and it was cool to show my friends an SMP system before any of them knew how to build one or could afford it.

Thanks for the article on KVM. Screenshots please!?

-L Mulder

Screenshot finally available, sorry for the delay...

What about VMware benchmarks? Anyone got any of those?

that would be really nice

I would love to, but VMware is not cooperating:

Extracting the sources of the vmmon module.

Building the vmmon module.

Using 2.6.x kernel build system.
make: Entering directory `/tmp/vmware-config0/vmmon-only'
make -C /usr/src/linux/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules
make[1]: Entering directory `/usr/src/linux'
  CC [M] /tmp/vmware-config0/vmmon-only/linux/driver.o
In file included from /tmp/vmware-config0/vmmon-only/linux/driver.c:88:
/tmp/vmware-config0/vmmon-only/./include/compat_kernel.h:31: error: expected declaration specifiers or '...' before 'compat_exit'
/tmp/vmware-config0/vmmon-only/./include/compat_kernel.h:31: error: expected declaration specifiers or '...' before 'exit_code'
/tmp/vmware-config0/vmmon-only/./include/compat_kernel.h:31: warning: type defaults to 'int' in declaration of '_syscall1'
make[2]: *** [/tmp/vmware-config0/vmmon-only/linux/driver.o] Error 1
make[1]: *** [_module_/tmp/vmware-config0/vmmon-only] Error 2
make[1]: Leaving directory `/usr/src/linux'
make: *** [vmmon.ko] Error 2
make: Leaving directory `/tmp/vmware-config0/vmmon-only'
Unable to build the vmmon module.

I'll keep trying...

... publishing benchmarks is forbidden by VMware EULA. :(

Mini HOWTO: VMware Workstation on 2.6.20

  • During installation, don't opt to run it (answer: no); it wouldn't work anyway
  • Switch to the $libdir/modules/source directory
  • Create backup copies of vmmon.tar and vmblock.tar (in case something goes wrong)
  • Untar both archives
  • In both archives, edit the file include/compat_kernel.h
  • Delete line 31 of the file: static inline _syscall1(int, compat_exit, int, exit_code);
  • Tar the contents again
  • rm -rf vmmon-only vmblock-only
  • You can now run it and it should work

line number is 21 not 31

It would be nice to know which AMD processors are "SVM capable processors".

Thanks for that clear and concise introduction to KVM--I can't wait to try it out!

I would appreciate it if you could provide some clarification about hardware. You mentioned that some newer processors feature hardware support for virtualization. Is this hardware support a requirement for using KVM?

Again, kudos to you for the great article.

> Thanks for that clear and concise introduction to KVM--I can't wait to try it out!

I'm glad you find it useful, that was the whole idea, to showcase the new technology.

> I would appreciate it if you could provide some clarification about hardware.

You can find a more detailed specification of the hardware used in one of the comments; sorry about that.

> You mentioned that some newer processors feature hardware support for virtualization. Is this hardware support a requirement for using KVM?

Yes, to run KVM, you absolutely need to have either Intel's Core 2 Duo or AMD's Athlon 64 X2 processor. I should've explained that better in the article. OTOH, you can start playing with the stock QEMU now, it's not picky about the underlying hardware. :)

"Yes, to run KVM, you absolutely need to have either Intel's Core 2 Duo or AMD's Athlon 64 X2 processor." Uh, no. I have a Core Duo T2600 (not Core 2) and I have VT. I've used it, in fact (I'm installing Windows 2000 right now, or trying to).

You need a processor with virtualization support: either SVM (AMD) or VT (Intel).

Except that virtualization needs to be enabled in the BIOS. Note that there are BIOS versions that do not support it.
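A quick way to check for these CPU flags from a shell (vmx and svm are the flag names Linux reports for Intel VT and AMD-V respectively; note that some BIOSes hide the flag entirely while others only disable the feature):

```shell
# Print 'vmx' and/or 'svm' if the CPU advertises hardware
# virtualization support; no output means no support.
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```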


Please provide some screenshots. Could you tell us the difference between Xen and KVM for regular users, i.e. can we switch between the OSes with ALT+TAB, etc.?

You're right, everybody loves screenshots :), I'll provide one soon.

Switching is really easy, you can have guest OS windowed on the host's display or in full screen mode (toggled by CTRL-ALT-f). Also, CTRL-ALT key combination grabs/ungrabs keyboard/mouse when in windowed mode. You get used to these bindings quite fast and it works well in practice.

Screenshot finally available, sorry for the delay...

Hi, thanks for that article, it covers up many things I wanted to know about KVM.

How are storage devices handled in the guest OS?
For instance, do you configure /mnt/drive on the host to be represented as D: in the guest?

> This means the file will grow as needed, not taking much disk space, unless it's really needed (up to maximum 6GB)

Is this only for the OS partition, or is it the whole storage space available to the guest OS?

Last one: how does Windows handle hardware drivers while being a guest OS? For example, if I use a wifi dongle handled by the host OS as a module, how will it be handled by the guest OS (immediate use? driver installation required in the guest OS? unavailability?)

The virtual disk mentioned in the article is exactly that: it is seen as a whole disk by the guest OS, so it needs to be partitioned, etc...

QEMU provides a plethora of useful formats of this file (this is host's point of view!):

fmt is the disk image format. It is guessed automatically in most cases. The following formats are supported:

  • raw: Raw disk image format (default). This format has the advantage of being simple and easily exportable to all other emulators. If your file system supports holes (for example ext2 or ext3 on Linux), then only the written sectors will reserve space. Use "qemu-img info" to know the real size used by the image, or "ls -ls" on Unix/Linux.
  • qcow: QEMU image format, the most versatile format. Use it to have smaller images (useful if your filesystem does not support holes, for example on Windows), optional AES encryption and zlib-based compression.
  • cow: User Mode Linux copy-on-write image format. It used to be the only growable image format in QEMU. It is supported only for compatibility with previous versions. It does not work on win32.
  • vmdk: VMware 3 and 4 compatible image format.
  • cloop: Linux Compressed Loop image, useful only to reuse directly compressed CD-ROM images present, for example, in the Knoppix CD-ROMs.
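For reference, creating images in these formats and inspecting the space they actually use looks roughly like this (a sketch; the file names are examples, and the commands assume qemu-img from the qemu package is installed):

```shell
# Skip politely if qemu-img is not installed on this host.
if command -v qemu-img >/dev/null 2>&1; then
    # raw: simple, exportable, relies on filesystem holes
    qemu-img create -f raw disk.raw 1G
    # qcow: QEMU's growable native format (as used in the article)
    qemu-img create -f qcow disk.qcow 1G
    # report virtual size vs. disk space actually used
    qemu-img info disk.raw
    rm -f disk.raw disk.qcow
else
    echo "qemu-img not installed"
fi
```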

I forgot to say, THANK YOU very much.

Xen chose paravirtualization. It's good to have another option on Linux that does pure virtualization instead.

Xen does support virtualization using the same instruction sets that KVM uses. In fact, it also uses a modified version of Qemu to emulate the rest of the system hardware.

Very nice article; unfortunately my CPU does not support KVM.

How is the hardware virtualized? I mean, how does your M$ XP see your ethernet card? Or your pendrive/floppy? Are these devices supported?

Thanks for any comment.

Taken from the manual page:

The QEMU PC System emulator simulates the following peripherals:

  • i440FX host PCI bridge and PIIX3 PCI to ISA bridge
  • Cirrus CLGD 5446 PCI VGA card or dummy VGA card with Bochs VESA extensions (hardware level, including all non standard modes).
  • PS/2 mouse and keyboard
  • 2 PCI IDE interfaces with hard disk and CD-ROM support
  • Floppy disk
  • NE2000 PCI network adapters
  • Serial ports
  • Creative SoundBlaster 16 sound card
  • ENSONIQ AudioPCI ES1370 sound card
  • Adlib(OPL2) - Yamaha YM3812 compatible chip
  • PCI UHCI USB controller and a virtual USB hub.

SMP is supported with up to 255 CPUs.

So basically, I can't get all the power of my GeForce 6600 in the guest OS. It will always think I'm using the video card described above.

That sucks... I was thinking of using KVM to be able to play WinXP games without needing to reboot... guess I'll wait.

Unfortunately not!

Having D3D or OpenGL apps virtualized is still a wet dream. :(

From what I hear, some kind of 3D proxy between host & guest would be needed to have your games run in the virtualized environment.

Actually I guess this kind of virtualization is not what I am looking for.

What I'm dreaming about is a dedicated (and very light) host OS that just manages hardware resource distribution between its guests, so that each guest uses its own drivers to access the hardware, while the host just makes sure all of its guests are served their respective needs as far as possible.

Do you think this is feasible?

That's called a Playstation3, I suppose: a machine with a lot of processor and graphics power that runs a hypervisor. This hypervisor can run a lightweight environment that runs games (regular PS3), or another OS compiled for the PowerPC architecture (such as Fedora for PPC). This OS will see virtualized hardware, but will still have access to the Cell processors that can do the heavyweight lifting that graphics need.

Linux, with minimal X windows (all you need is the server; basic clients like xterm, xkill, etc.; and your chosen virtualization program) and very basic applications (say, busybox, the various fsck programs, fdisk, and maybe the md tools) would get you a platform for virtualization. You could use VMware Server if you don't have VT or equivalent; it would be the best option, and it is also free. But if you do, then KVM is, or will be, the best option. IIRC you can use raw filesystems with VMware, so you could use VMware now and KVM later as it matures (I'm doing a Windows 2000 installation in KVM right now and it hangs without the -no-kvm switch; I've read that it will work with VT after the install). This is pretty much the approach taken with VMware ESX Server...

Somewhat ironically, some of the higher-end VMWare products do more of this, although you'll be a little disappointed with two applications trying to manage the hardware at the same time. It's usually better to have one 'real' OS per PC.

More useful and interesting would be to write device drivers that can be called from a variety of operating systems, such as a Linux or *BSD kernel, the GNU Hurd kernel (user mode?), and Windows.

At one time in the distant past, I was responsible for maintaining VM/370 on all machines at an IBM lab.

I have been contemplating writing my own x86-64 lightweight hypervisor. The following is a cut-and-paste of a small portion of my notes that seems to address what you are asking about.

The Section from my notes follows.


Dedicate a subset of the physical hardware to a virtual machine.

As an example let us consider the following physical machine:

* Dual core AMD 64
* 2 Giga-bytes main memory
* Two network cards
* 2 Hard Disks:
o Windows-XP (i.e. 32-bit OS) installed (and functioning) on disk 1.
o Linux 32-bit installed on disk 2.

Booting Windows or Linux is determined by the boot drive as defined in the BIOS, i.e. to boot the non-active OS, the machine has to be restarted, the BIOS entered, and the active boot disk changed.

A hypervisor (using SVM) can be booted (from a USB memory stick or CD/DVD-ROM). In the hypervisor, virtual machines can be defined:

* Virtual machine one
o Main memory is 1st Giga-byte of physical memory.
o physical disk 1 is virtual disk 1.
o physical network card 1 is virtual network card 1.
* Virtual machine two
o Main memory is 2nd Giga-byte of physical memory.
o physical disk 2 is virtual disk 1.
o physical network card 2 is virtual network card 1.

The hypervisor can then be told to boot first one and then (while the 1st virtual machine is still running) told to boot the second virtual machine.

The hypervisor allows the kernels of the virtual machines to run at ring-0 privilege level (which is not the norm).

Refinement (Assumptions)

For the above to (stand a chance of) work(ing) some additional requirements have to be met:

* The network cards have to be given unique (i.e. different) IP numbers e.g:
o Windows uses
o Linux uses
* Linux has to be configured to NOT use the graphics card (nor the system console).

Using this environment

The performance hit incurred by virtualization is almost nothing in this scenario.

There is however a performance hit:

* Each virtual machine believes that only 1 CPU exists in the machine. i.e. performance is only that of a single core machine.
* Each virtual machine believes that main memory is half the size that it actually is (thus there probably will be more paging).

Everything should work the same as it does on the dedicated machine with the above mentioned performance reservations.


To be able to work on Linux at all, we have to make a connection to it. This could be achieved by using telnet on the Windows virtual machine.


Although the above is only theory for the AMD64, the equivalent on IBM 370 hardware functioned under VM/370:

* There was no virtual tape drive; a physical tape drive had to be attached to ONE virtual machine, and that virtual machine could then use that physical tape drive directly (i.e. without hypervisor intervention).


* Hypervisor intervention in the above scenario is ZERO. This is relevant for more than one reason:
o The performance hit (from the SVM) is ZERO
o Probably more important is that the amount of code at the SVM level is minimal.


* There are a number of issues that have intentionally not been addressed by the above scenario because:
o It only works in theory (as a next step we need a proof of concept, or to see why the theory does not work or does not bring the advantages expected of it).
o The goal was to keep the definition of the theory as simple as possible.
o It is believed that resolving the issues not yet addressed can be achieved with little additional effort at the SVM level. The section Shortcomings below deals with the foreseen issues (and during a proof of concept may be extended).


The hypervisor/SVM requires a few resources:

* Some main memory
* A share of a CPU

The USB subsystem requires hypervisor-level functionality. An example of what needs to be prevented: a USB device is plugged in and that information is passed on (unfiltered by the hypervisor) to one or more of the virtual machines.

Just slightly off-topic, but I installed Xen under FC6 and it works with Windows XP. As far as I know, Xen could (theoretically) pass a PCI card directly into the virtual machine. So I bought a motherboard with 2 PCI-e slots (SLI) and plan to install a second PCI-e adapter and see if it works. Not now, though.

3D in a VMware virtual machine has been available for Linux and Windows for 2 years, and will soon be available for Mac OS as shown in this video: 3D Graphics in VMware Fusion for Mac OS X

Well, that's really great news! Although it seems that Fusion is OS X only, and still in beta, so we'll have to wait a little bit before their virtualized 3D is universally available. But, still, thumbs up for VMware.

To enable a virtual machine for accelerated 3-D

1. Choose a virtual machine with Windows 2000 or XP guest operating system.

Note: Do not enable Direct3D on a virtual machine that is powered on or suspended.

2. Add the following to the configuration (.vmx) file for the virtual machine:

mks.enable3d = TRUE

This line enables accelerated 3-D on the host. It is required to support accelerated 3-D in the guest and also enables the host to accelerate 2-D portions of the guest display.

3. You may also add one or both of the following optional lines:

svga.vramSize = 67108864

This line increases the amount of VRAM on the virtual display card to 64 MB. Adding more VRAM helps to reduce thrashing in the guest. The maximum value is 128 MB.

vmmouse.present = FALSE

This line disables the absolute pointing device in the guest. Applications which require DirectInput relative mode need to turn off the absolute pointing device in the guest. In practice, this is only required for a certain class of full screen 3-D applications (for example, real-time games like first-person shooters).

Note: If you set the vmmouse.present option, you should also turn off the preference for motion ungrabbing in the Input tab of the Preferences settings dialog.

To turn off ungrabbing for vmmouse.present:

a. Choose Edit > Preferences.

b. Click Input.

c. Deselect Ungrab when cursor leaves window

In response to getting 3D acceleration in virtual machines (VMware & others): I've stumbled on a driver which does just this. Browse their page to find it; apparently one may get full OpenGL driver-level acceleration, which in theory would make it possible to play those windoze games, at least the ones that support OpenGL. There is a link there to a video of just this. It appears to be Unreal2 or UT2k4.

Both of which have native linux binaries.

One reason to use the Raw disk image format (the default) is that it can then be mounted with the command:

mount -o loop,offset=32256 your-disk-images-name.img /mnt

This allows you to easily copy files from your system into (and out of) the disk image (the offset of 32256 bytes = 63 sectors × 512 bytes skips the space before the first partition, so the boot sector of the image is not mounted).

Using the raw disk image format does not result in larger files than the other formats (except cloop, which is a compressed format), as the reserved space is simply recorded as "next follows a string of x million zeros" rather than actually writing x million zeros. This is true for ReiserFS, ext2, ext3 and other Linux filesystems, but apparently not true for Windows.
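This "holes" behaviour is easy to demonstrate with standard GNU tools (a sketch; the file name is an example, and the numbers assume a filesystem that supports sparse files):

```shell
# Create a 100 MB file without writing any data blocks:
truncate -s 100M sparse.img

# Apparent size vs. blocks actually allocated on disk:
ls -l sparse.img     # reports 104857600 bytes
du -k sparse.img     # reports (close to) 0 KB actually used

rm -f sparse.img

# The mount offset mentioned above is just arithmetic:
echo $((63 * 512))   # 32256 bytes before the first partition
```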

"Finally user-friendly virtualization for Linux"

Weird title. Virtualization for Linux has been fast and user-friendly since 1999, with VMware Workstation for Linux. OK, it was not free as in beer, but VMware remedied that in 2006 when they announced VMware Player and VMware Server for Linux.

VMware is a very good Linux citizen, contributing patches to the Linux kernel, X, and GTK+, and open-sourcing their UI libraries (libview, ...). They have an easy-to-use, kick-ass product. Not even mentioning them in your article shows an incredible bias.

You should at least give credit where credit is due: KVM copies the VMware hosted virtualization architecture, except that unlike VMware, it only works on very recent hardware. User-friendly indeed...

First of all, I do admit that VMware is a good product. There, I said it. :)

OTOH, this article is NOT about VMware, it is about KVM, a new emerging technology. It is about sharing info about it, helping people to get started with it, and showcasing what it can do for you.

Now, I understand that you have your reasons to love and defend VMware, but I'm really not attacking any other virtualization product. I'm just amazed by KVM's fast progress and its simplicity, which I expressed with the term "user-friendly". A few other people have found that a strange wording, but I know why I said it and I stand behind it.

"...KVM copies VMware..." - KVM copies nothing. KVM is a completely different project (AFAIK, not connected in ANY way to VMware) developed by fine people who are giving their hard work for free. I admire that fact, and I admire their knowledge and what they've accomplished in such a short time.

Now I'll tell you why I think KVM is (actually, will be, when it matures and becomes mainstream) more "user-friendly". If VMware were *that* user-friendly, then this page wouldn't exist. That page represents the heroic effort of people who were unable to install or run VMware out of the box. I'm quite sure that page is very popular even today; the timestamps on the files in that directory reveal that fact.

VMware inserts proprietary binary modules into your kernel, tainting it in the process, and then you lose support from the community. If anything goes wrong (and I've seen my share of "oopses" from binary modules), you're out of luck. Some people don't care much about that; I do.

I presume we'll have to wait and see for ourselves who eventually wins the virtualization wars on Linux. If it's about winning at all.

Just for your info:
VMware makes it illegal to perform benchmarks and publish them (read the EULA).

Sad, but true. But, thanks for the info.

I've now removed all data I had collected and published about VMware's performance. All that effort in vain... :(

You know, a vendor can write an EULA for their product stating you have to give them your firstborn child if they wish, but that doesn't mean it's a legally binding contract. You can't sign away your basic rights, and I think performing and publishing benchmarks is definitely something they have no legal authority to prohibit you from doing.

This is not true anymore: the latest EULAs in VMware's products have lifted this restriction.

Sorry, but the specific VMware product I benchmarked had that clause in its EULA; I checked after I was warned that publishing benchmark results is forbidden.

But if what you're saying is true, that's good news. I've always hated products that you can test but are not allowed to share your results with others. When I found out that VMware was among them, it lowered its value in my eyes.

"KVM is a completely different project (AFAIK, not connected in ANY way to VMware)"

This is true. But it uses the same design VMware has been using for 9 years in their hosted products. Talking about this design at length without even mentioning VMware is weird.

"If VMware was *that* user-friendly, than this page wouldn't exist. That page represents a heroic effort of people who were unable to install or run VMware out of the box. I'm quite sure that page is very popular even today, timestamps on the files in that directory reveal that fact."

Do you realize that this page is maintained by Petr Vandrovec, who is a VMware employee working at their headquarters in Palo Alto?

Oh the irony...

A load of turd. I could also say that you copied me because you can walk. The concepts of virtualization and emulation have been around a long, long time. Remember the... wait a second. How old are you?

Last week I tried KVM. I got 60x slower performance than native. I was so disenchanted.
But it now looks like KVM has gained a massive performance boost.

Check out this thread...

Here is a snippet...


Here are some quick numbers. Context-switch overhead with lmbench lat_ctx -s 0 [zero memory footprint]:

#tasks   native   kvm-r4204   kvm-r4232(mmu)
    2:     2.02      180.91             9.19
   20:     4.04      183.21            10.01
   50:     4.30      185.95            11.27

so here it's a /massive/, almost 20 times speedup!

Context-switch overhead with -s 1000 (1MB memory footprint):

#tasks   native   kvm-r4204   kvm-r4232(mmu)
    2:    150.5     1032.97           295.16
   20:    216.6     1020.34           393.01
   50:    218.1     1015.58          2335.99 [*]

the speedup is nice here too. Note the outlier at 50 tasks: it's consistently reproducible. Could KVM be thrashing the pagetable cache due to some sort of internal limit? It's not due to guest size

The -mmu FC6 guest is visibly faster, so it's not just microbenchmarks that benefit from this change. KVM got /massively/ faster in every aspect; kudos, Avi! (Note that r4204 already included the interactivity IRQ fixes, so the improvements are, I think, purely due to pagetable caching speedups.)

I am lucky enough to have a QX6700 Core 2 Quad Extreme overclocked to 3.0 GHz. After fussing with the networking for three days, I got Windows XP SP2 (32-bit) running under QEMU/KVM on 64-bit Feisty. With the VT support on the Core 2 Quad Extreme, Windows XP runs very fast indeed.

I loaded Rhapsody and am playing tunes while doing development on Microsoft Visual Studio .NET 2005. This is about as badly as I can beat this. It is running just fine. I gave it 1 GB of memory and a 100 GB disk.

The improvements that I would like to see in hardware and software are more support for devices. I think this will require an IO memory management unit in the CPUs, but I am not sure. I do not have access to the fancy graphics card or the fancy sound card; I have an SB16 and some generic video card in XP.


I don't get it. If I want user friendly VT-based virtualization, I use parallels. KVM is a waste of time.

Don't worry, new versions of various distributions will have KVM compiled as a module. So you'll just apt-get install kvm (or whatever you do on your favorite distribution to install new packages), and voila, you'll have an instant virtualization framework to play with.

This article was written even before KVM was released (on a pre-release kernel).

I don't get it. If people don't want to pay for software, why don't they just steal it instead of using FOSS?

I recommend that anyone using FOSS because it doesn't cost them money just steal software and stop complaining about FOSS.

Do as I do, not as I say.

It's easy to steal software, but it feels better not to have to. I'm literally happier using FOSS.

I understand why no one else replied to your comment, but I give my opinion freely.

We are trying to make the world a better place, and you put us down. You disgust me. Most proprietary software is a knock-off of FOSS. Just as well; have fun when you have a problem, and see how fast it gets fixed.

Thx, Great Article.

KVM and JeOS rock on my machine :)



Can someone provide me with some scripts to benchmark a KVM guest?

Thanks in advance,