On my desktop computer I have VirtualBox, and I can run many concurrent VMs at near-native speed.
On my server, which is twice as powerful as my desktop, I have Debian + VMware Server 1.0 (because I don't like the Java bloat introduced with 2.0), and if I run a single VM, it runs at near-native speed.
The real bottleneck is disk access speed: if I start TWO (yes, just 2!) VMs at the same time (read: every time the server is turned on), the server is paralyzed for 40 minutes. 40 minutes to boot 2 Windows VMs! Completely useless!
I had better performance when I installed Virtual PC on a 400 MHz Celeron!
If I search for "vmware slow hdd access" I get tons of results, so I assume this is a huge, well-known VMware problem, right?
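Before blaming VMware alone, it's worth confirming on the host that the disk really is saturated while the two VMs boot. A quick diagnostic sketch (iostat comes from the sysstat package):

    # Extended per-device stats every 2 seconds: %util near 100 and a large
    # await while the VMs boot mean the disk itself is the bottleneck.
    iostat -x 2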
So I was considering one of these options:
- Replace the server HDD with two SSDs in RAID 0 (see the sketch after this list)
- Switch to Proxmox VE
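For the RAID 0 option, striping two SSDs with Linux software RAID is simple; here is a minimal sketch, assuming the SSDs show up as /dev/sdb and /dev/sdc and the VM images live under /var/lib/vmware (both paths are placeholders). Keep in mind RAID 0 has zero redundancy: one dead SSD loses the whole array.

    # Create a striped (RAID 0) array from the two SSDs:
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # Put a filesystem on the array and mount it where the VM images live:
    mkfs.ext3 /dev/md0
    mount /dev/md0 /var/lib/vmware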
Has anyone tried Proxmox? How much better is it? Will it fix the bottleneck?
I don't have a spare server to experiment with, so if I wipe my server to play with Proxmox, I will lose at least 2 working days...
Answer
Well, you might not believe it, but I wiped my server (it was only 4 days old, so there was no important data on it yet) and installed the Proxmox VE distribution (Debian 5.0 + QEMU-KVM + OpenVZ).
Wow! It is dramatically faster than VMware on Debian!
There is a difference in behavior, though, so let me explain:
VMware is good at managing RAM: the unused RAM of one VM is left free for the other VMs. But I/O makes a VM "hang" while it waits for the hypervisor to finish writing to the disk. So if your VMs use the disk heavily, you will be disappointed by the performance unless you have a RAID 0+1 array or a dedicated physical disk for each VM.
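You can feel the difference between synchronous and cached writes even without a hypervisor; here is a rough illustration with dd (the file path and sizes are arbitrary, and this is obviously not a VMware benchmark):

    # Every write must reach the disk before dd continues (like synchronous guest I/O):
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 oflag=dsync

    # The kernel absorbs the writes into the page cache and flushes them later:
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=100

    rm /tmp/ddtest

The second run typically reports much higher throughput, because dd returns as soon as the data is in RAM.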
qemu-kvm, on the other hand, does not share unused RAM between guests, or it does so far less effectively than VMware (judging by the web UIs of both hypervisors). But I think QEMU caches I/O in RAM and writes it to the disk later (the Proxmox web UI even has an indicator for this: "IO delay: 5%"). The performance gain is huge!
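This matches how QEMU's disk cache modes work: with cache=writeback the guest's writes land in the host's page cache and are flushed to disk later, while cache=writethrough makes the guest wait for the physical disk. A minimal sketch of launching a KVM guest with writeback caching (the image path and memory size are placeholders):

    # Boot a KVM guest whose disk writes are buffered in the host page cache:
    qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file=/var/lib/vz/images/100/vm.qcow2,cache=writeback

The trade-off is safety: writes already acknowledged to the guest can be lost if the host loses power before the cache is flushed.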