I can say that at work we have 105 guests running on 10 VMware 3i hosts, all using NFS stores. It works great! Two of the NFS servers are high-performance NetApp and BlueArc NAS appliances; one is a server-class P4 running Ubuntu and the kernel NFS server. All are on 1-gigabit Ethernet.<br>
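For what it's worth, attaching an NFS export as a datastore on an ESX 3i host is a one-liner from the console or remote CLI. The hostname, export path, and datastore label below are made-up placeholders:

```shell
# Register an NFS export as a datastore on an ESX 3i host.
# "nfs01.example.com", "/export/vmstore", and "nfs-vmstore" are
# placeholders -- substitute your own server, share, and label.
esxcfg-nas -a -o nfs01.example.com -s /export/vmstore nfs-vmstore

# List the configured NAS datastores to confirm the mount.
esxcfg-nas -l
```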
<br>Among the many things I have not yet done is benchmark I/O on these platforms.<br><br><div class="gmail_quote">On Tue, Mar 31, 2009 at 10:50 AM, mike havlicek <span dir="ltr"><<a href="mailto:mhavlicek1@yahoo.com">mhavlicek1@yahoo.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
Hello,<br>
<br>
I have been poking around with the idea of virtualization. I have an IBM<br>
eServer xSeries 360 on which I have been playing with the VMware ESX 3i demo. The immediate problem I see is how to bare-metal restore a guest. I suspect that add-on products handle everything one could wish to do.<br>
<br>
The scenario I am anticipating is having to rebuild the hypervisor itself,<br>
say on a new set of disks, and then reloading the guests. I figure that with the<br>
VMware products there is a clean way to do this sort of thing at full purchase prices.<br>
<br>
What I am wondering is what alternative products folks have experience with for handling this sort of rebuild, and whether anyone has suggestions for hosting a "hypervisor" on this IBM server. I am toying with the idea of using NFS-mounted disk space from a Solaris 9 server to store VMs. I have not yet looked into how this would work with Xen. I do suspect that any<br>
hypervisor running under Red Hat or a derivative would require a non-Red Hat<br>
kernel on this hardware. In theory the NFS-mounted space works OK with ESX 3i, although taking the mounts offline from the NFS server threw a monkey<br>
wrench into things. (I did stop the VMs stored on the NFS mounts before unsharing, but I didn't put the hypervisor into maintenance mode, and I don't know whether that would have kept those VMs from becoming unknown.) But I get ahead of myself ... and further ahead, what about SAN? (I don't know when I will have my home SAN running. :)<br>
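For the Solaris 9 side, a sketch of the export might look like the following. The path and client name are hypothetical, and I am assuming VMware's usual requirement that the ESX host get root access to the share (hence anon=0):

```shell
# Entry for /etc/dfs/dfstab on the Solaris 9 server.
# "/export/vmstore" and "esxhost" are placeholder names.
# anon=0 lets the ESX host's root user own the VM files;
# rw= restricts the export to that host.
share -F nfs -o rw=esxhost,anon=0 /export/vmstore

# Then start (or refresh) the NFS server side:
/etc/init.d/nfs.server start   # or: shareall
```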
<br>
Thanks,<br>
<br>
Mike<br>
<br>
<br>
<br>
_______________________________________________<br>
clue-tech mailing list<br>
<a href="mailto:clue-tech@cluedenver.org">clue-tech@cluedenver.org</a><br>
<a href="http://www.cluedenver.org/mailman/listinfo/clue-tech" target="_blank">http://www.cluedenver.org/mailman/listinfo/clue-tech</a><br>
</blockquote></div><br>