[clue] VMWare question (and VMware Server EOL)

Jim Ockers ockers at ockers.net
Thu Nov 8 11:15:25 MST 2012


Hi Crawford,

I thought I'd share a few remarks to follow up on this, based on my 
experiences with ESX in the last few days.

Crawford Rainwater wrote:
> Jim:
>
> I have not had "hands on" with ESX/ESXi5 so I will remark on ESX/ESXi4.  
>
> Yes, there is the VMware "vendor lock-in" angle, whereas KVM is Open Source (under Red Hat) and community oriented.  ESXi is CLI oriented, and if you need a GUI you have to go with (and pay for) the vSphere line, which requires a Microsoft Windows based client as a host.  There was also a limit on the CPUs and RAM ESXi could use on the "bare metal", as well as per virtual machine.  VMware will only support certain guest OSes, so there is the potential issue of being further vendor locked in there as well.
>   
We're using ESXi5 and I'm not familiar with ESXi4. The vSphere client 
(Windows only) seems to work great, and none of us has paid anything 
for it so far. There is a CLI, but we generally use the GUI for 
everything except perhaps copying ISO images to a datastore. As far as 
I know, ESXi5 doesn't have any CPU or RAM limitations (none that we've 
run into, anyway). We're using it on bare metal to get a few more 
years of usefulness out of some 8-way Xeon Apple Xserve servers that 
were really expensive back in the day and are tricked out with lots of 
RAM.

The CLI could be interesting, but for me personally the GUI is much 
more efficient for getting work done.
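
That said, the one CLI task I mentioned (copying ISO images to a 
datastore) is easy to script if you'd rather not click around. Here's 
a rough sketch of how I'd do it from a Linux box, assuming SSH is 
enabled on the ESXi host and the datastore is visible under 
/vmfs/volumes/; the host name and datastore path are made up, not 
ours:

    #!/usr/bin/env python
    # Rough sketch only: copy a local ISO up to an ESXi datastore with scp.
    # Assumes SSH is enabled on the ESXi host and that datastores show up
    # under /vmfs/volumes/.  The host name and datastore path below are
    # placeholders, not our real ones.
    import subprocess
    import sys

    ESX_HOST = "root@esx1.example.com"          # hypothetical ESXi host
    ISO_DIR = "/vmfs/volumes/datastore1/iso"    # hypothetical datastore dir

    def push_iso(iso_path):
        """scp the ISO into the datastore's iso/ directory."""
        subprocess.check_call(["scp", iso_path, "%s:%s/" % (ESX_HOST, ISO_DIR)])

    if __name__ == "__main__":
        push_iso(sys.argv[1])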

Every guest OS we've tried (CentOS 6, Windows 2003, Windows XP, Windows 
7) has worked OK; I haven't run into any guest OS limitations. I don't 
think we're using the VMware Tools on any of them.

There is a pass-through mode for iSCSI targets that gets around the 
vendor lock-in aspect of a VMFS5-formatted filesystem. That said, the 
VMDKs for startup disks do need to be on a VMFS5 (or VMFS3) formatted 
datastore. The secondary/tertiary disks (which in our case are 
individual iSCSI targets) can be formatted however the guest OS wants, 
because the block device is passed directly through from the ESX iSCSI 
initiator into the guest OS as a plain disk.
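
To make the pass-through point concrete: from inside a Linux guest, 
the raw-mapped iSCSI target just shows up as another SCSI disk, so the 
guest can put whatever filesystem it likes on it and nothing 
VMFS-specific is involved. A minimal sketch, assuming the disk appears 
as /dev/sdb (check dmesg or /proc/partitions in your own guest first):

    #!/usr/bin/env python
    # Sketch only: format and mount a passed-through disk from inside the guest.
    # /dev/sdb is an assumption; verify the real device name (dmesg,
    # /proc/partitions) before running anything like this for real.
    import subprocess

    DEVICE = "/dev/sdb"      # hypothetical: the raw iSCSI target as seen by the guest
    MOUNTPOINT = "/data"

    subprocess.check_call(["mkfs.ext4", DEVICE])        # any filesystem the guest supports
    subprocess.check_call(["mkdir", "-p", MOUNTPOINT])
    subprocess.check_call(["mount", DEVICE, MOUNTPOINT])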

The workflow for moving a VM between ESX servers manually is pretty easy 
and quick, since all our storage and all VMware datastores are on iSCSI 
targets from an OpenFiler. We haven't invested in the whizzy vMotion 
stuff that lets you migrate VMs without shutting them down, because we 
don't feel we need it at this point.
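
For anyone curious what the manual move looks like: since both hosts 
see the same iSCSI-backed datastores, it's basically power off, 
unregister the VM on the old host, register the same .vmx from shared 
storage on the new host, and power it back on. Here's a very rough 
sketch driving ESXi's vim-cmd over ssh. The host names, VM ID, and 
.vmx path below are placeholders, and you'd look up the real VM ID 
with "vim-cmd vmsvc/getallvms" first:

    #!/usr/bin/env python
    # Very rough sketch: move a VM between two ESXi hosts that share a
    # datastore, using vim-cmd over ssh.  Every name, ID, and path below is
    # a placeholder; find the real VM ID with "vim-cmd vmsvc/getallvms".
    import subprocess

    OLD_HOST = "root@esx1.example.com"   # hypothetical source host
    NEW_HOST = "root@esx2.example.com"   # hypothetical destination host
    VM_ID = "42"                         # placeholder inventory ID on OLD_HOST
    VMX = "/vmfs/volumes/iscsi-ds1/myvm/myvm.vmx"   # placeholder path on shared storage

    def ssh(host, *cmd):
        subprocess.check_call(["ssh", host] + list(cmd))

    ssh(OLD_HOST, "vim-cmd", "vmsvc/power.off", VM_ID)   # (cleanly shut down the guest first)
    ssh(OLD_HOST, "vim-cmd", "vmsvc/unregister", VM_ID)  # drop it from the old host's inventory
    ssh(NEW_HOST, "vim-cmd", "solo/registervm", VMX)     # register the same .vmx on the new host
    # Then power it on from the vSphere client, or vim-cmd vmsvc/power.on <new ID>.
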
> With KVM one can go CLI (I like "virsh" personally) or GUI oriented.  No limits on the CPU or RAM amounts for the guests beyond what the "bare metal" can have.  There are various ways to configure the hard disks to mimic VMware's methodology, and likewise for vMotion and power management equivalents (which require ESX or vSphere level licenses on the VMware side).  No lock-in on guest distributions; however, vendors with support offerings (e.g., Microsoft) will not always support their systems as guests.  SUSE and RHEL on the "bare metal" are two exceptions I recall where support is available.
>
> In the end, the cost benefit and effectiveness of being able to do pretty much the same tasks that ESX/vSphere + additional modules can provide, using KVM with some proper scripts and planning, weigh more towards KVM in my book.
>   
I'll have to look into KVM. I think that AceNet (www.ace-host.net) must 
be doing KVM. I'm playing with a virtual server there.
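
If I do dig into KVM, it looks like the virsh CLI Crawford mentions 
also has Python bindings (the libvirt module), so the same kind of 
scripting should be possible there too. An untested sketch of what I 
gather the API looks like, just listing the guests on a local KVM 
host:

    #!/usr/bin/env python
    # Untested sketch of the libvirt Python bindings (the API behind virsh),
    # listing guests on a local KVM host.  Needs the libvirt-python package.
    import libvirt

    conn = libvirt.open("qemu:///system")    # same URI virsh uses for local KVM
    for dom in conn.listAllDomains():        # running and defined-but-off guests
        state, _reason = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print("%-20s running=%s" % (dom.name(), running))
    conn.close()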

Regards,
Jim

--
Jim Ockers, P.E., P.Eng. (ockers at ockers.net)
Contact info: http://www.ockers.net/


