Improve VMware vMotion performance: migrate more than 2 virtual machines at a time

Are you also a proud cloud admin? Bragging about those mighty ESX hosts at family parties? And are you just as annoyed when you have to wait for a host to go into maintenance mode because only 2 VMs vMotion simultaneously?

Yawn.

Luckily, there’s a way to improve the vMotion performance of your cluster. Open C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg in Notepad++ or any other serious text editor (one that makes a backup copy automatically; otherwise, make a backup yourself first). Paste the following lines between the <vpxd> and </vpxd> XML tags:

<ResourceManager>
  <maxCostPerHost>24</maxCostPerHost>
</ResourceManager>
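
For reference, the relevant part of the file should end up looking roughly like this. The surrounding structure is from memory and your vpxd.cfg will contain plenty of other settings, so only the ResourceManager block is new:

<config>
  <vpxd>
    ...
    <ResourceManager>
      <maxCostPerHost>24</maxCostPerHost>
    </ResourceManager>
    ...
  </vpxd>
</config>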

Save it and restart the VMware VirtualCenter Server service. Done.
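
If you prefer the command line over services.msc for the restart, something like this from an elevated command prompt should do it (the short service name is vpxd on my installs; double-check it in services.msc first):

net stop vpxd
net start vpxd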


Install VMware ESX 4.0 on a Supermicro X8DTN+ server with a JMicron PCIe-to-SATA II controller

While trying to install ESX 4.0 Update 2 the old-fashioned way (with a DVD) on a new Supermicro X8DTN+ based server, I was confronted with the lack of a JMicron driver for the PCIe-to-SATA II controller that my DVD-ROM drive was attached to. How annoying.

Instead of wasting my time searching for an ESX driver (disc), I opted to install ESX from USB. I downloaded UNetbootin and wrote the ESX ISO to a 2GB SanDisk Cruzer Blade USB stick.

After the image was written, I still had to adjust the setup so it would ask me for the installation media. If you don’t, the installer assumes you are installing from a DVD-ROM drive. I opened syslinux.cfg in the root of the USB drive and added ‘askmedia’ to the end of every install command (the append line of each boot entry).
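
To give an idea, an install entry in syslinux.cfg ends up looking roughly like this. The kernel, initrd and boot options vary per build, so I left them as a placeholder; the only thing I added is askmedia at the end of the append line:

label install
  kernel vmlinuz
  append initrd=initrd.img [existing boot options] askmedia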

I plugged the USB drive into the server and started the install. It seemed to hang at the point where the storage drivers module was loaded, but after a few minutes the install continued. During the installation I got the option to choose my installation media. Everything went smoothly from there.

My whitebox ESX cluster – part 1

I wanted to get my own ESX cluster for the obvious reasons. At the time, the Compaq Evo Desktop D510 Small Form Factor was pretty popular. The Intel 845 chipset, though, only supports 4GB of RAM. A serious limitation, so I resumed my quest for the perfect ESX whitebox. I soon learned that the Intel 965 chipset supports 8GB of RAM. Together with the possibility to run Core 2 Duo CPUs, it was evident that this was my chipset of choice. I stuck with the SFF enclosure so a cluster would take up less space and use less electricity.

I ended up with the HP Compaq dc7700 SFF PC with E6400 CPUs. I managed to get 4 of those, and because DDR2 RAM was cheap at the time, I put 4x2GB sticks in all of them.

For the LAN part, I would rely on the onboard Intel 82566DM NICs. For the iSCSI SAN connection I bought 5x PCIe Gbit Intel 82574L NICs and used the software iSCSI adapter.
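
A side note on that software adapter: it can be enabled from the vSphere Client, and if I remember right also from the ESX service console with something along these lines (command name from memory, so treat it as a sketch and check the built-in help first):

# enable the software iSCSI initiator on the ESX host
esxcfg-swiscsi -e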

At that time vSphere came out with official support for the Intel ICH9 southbridge. Hoorah!

I bought some new cheap hardware for the SAN, but FreeNAS and OpenFiler didn’t support the integrated graphics in the chipset yet, so I ended up using Windows Storage Server 2008 with the Microsoft iSCSI Software Target. Very much NOT ideal, but it was and still is stable.

When vSphere 4.1 came out, I upgraded without any problems and gained additional hardware sensors. Sweet.

It’s all running pretty stably now. Of course there is the disk I/O bottleneck. And eventually even the 32GB of RAM will fill up. Running 25+ servers on 8 CPU cores also has its bad moments.

I would like to put some Q6600 or QX6800 CPUs into the boxes at some point. But the problem is how to find 4 of them :-/ I also have my eye on a 24-port Gbit switch with trunking capabilities, but that will also take some more time.

Overall, it’s my perfect whitebox ESX cluster =)