VMware vSphere 5.1 has been announced and will be available from September 12th, 2012. What’s new?

NOTE: I’m still digging through the ‘what’s new’ documents, so I’ll probably be adding info to this document in the coming days.

This post sums up all the notable changes and new features in vSphere 5.1 for future reference.

VM:

  • 64 vCPUs are now possible (up from 32 in 5.0)
  • VM version 9 now supports GPU acceleration on selected NVIDIA GPUs (only usable in VMware View)
  • New feature: Guest OS Storage Reclamation (only usable in VMware View)
  • New feature: VHV (virtualized hardware virtualization) to run VMs inside of VMs
  • A shift from using virtual hardware versioning to ‘Compatibility levels’

Platform:

  • Upgrading to VM version 9 doesn’t require the VM to be down
  • Shell activity is no longer logged as root, but as the logged on user
  • New feature: Support for SNMPv3
  • vMotion and svMotion can now be combined in one action
  • Windows 8/2012 are supported
  • Piledriver, Sandy Bridge-E and Ivy Bridge CPUs are now supported

Network:

  • dvSwitch supports LACP
  • New feature: Network Health Check
  • New feature: Configuration Backup & Restore
  • New feature: Roll Back and Recovery
  • SR-IOV support

Availability:

  • vMotion is now possible on non-shared storage
  • vMotion and svMotion can now be done simultaneously (“Unified” vMotion)
  • New feature: vSphere Data Protection. Agentless VM to disk backup with dedup
  • New feature: vSphere Replication. Replication on VM level over SAN or WAN
  • After installing VMware Tools 5.1 (and a reboot), no more reboots are needed for future installs of VMware Tools

Security:

  • New feature: VMware vShield Endpoint is now included

Automation:

  • sDRS and Profile-Driven Storage are now integrated into VMware vCloud Director (5.1?)
  • vSphere Auto Deploy has two new methods for deploying vSphere hosts: Stateless Caching and Stateful Installs
  • Up to 80 concurrent host boots are now supported by the Auto Deploy server

Management:

  • vSphere Web Client is greatly improved
  • New feature: vCenter Single Sign-On
  • vCenter Orchestrator has newly designed workflows and can now be launched by the vSphere Web Client

Licensing:

  • vSphere licensing is back to CPU sockets instead of a specific amount of vRAM per CPU socket. (Thanks Michael)

Storage:

  • Max number of hosts that can share a read-only file is now 32 (up from 8 in vSphere 5.0)
  • New feature: Virtual disk type SE sparse disk. Used to wipe and UNMAP free disk space from within the VM (via VMware Tools). HW version 9 required. VMware View only.
  • Grain size of VMDKs is now tunable, but not by users =) Default size is still 4KB. Redo logs still use 512B.
  • Improved APD and PDL handling: Misc.APDHandlingEnable, Misc.APDTimeout, disk.terminateVMOnPDLDefault, das.maskCleanShutdownEnabled
  • PDL detection is extended to iSCSI arrays with a single LUN per target
  • Booting from software FCoE adapter is now supported
  • 16Gb speed support for 16Gb FC HBAs
  • vCloud Director 5.1 can now also make use of VAAI NAS primitives (using a plugin from the vendor) to off-load the creation of linked clones
  • New namespaces in esxcli (esxcli storage san) for troubleshooting (see the examples after this list)
  • New feature: Smartd daemon. Collects SMART info from disks. Can only be used from esxcli.
  • SIOC will now calculate the latency threshold from the 90% throughput value (90% of peak value)
  • SIOC will now be enabled in stats-only mode by default
  • When involving vCloud Director linked clones, sDRS will not recommend placing them on datastores that do not contain the base disk or a shadow VM copy of the base disk
  • The datastore correlation detector now uses the I/O injector, so VASA support is no longer needed
  • New SIOC metric: VmObservedLatency
  • Support for 4 parallel disk copies per svMotion operation to distinct datastores
  • Jumbo frame support for all iSCSI adapters
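
Two quick examples of the new esxcli additions mentioned in the list above, run directly in an ESXi 5.1 shell (the device identifier is a placeholder, so substitute one of your own, and check the CLI reference for the exact namespaces in your build):

esxcli storage san fc list

shows the FC adapter details from the new san namespace, and

esxcli storage core device smart get -d naa.<device id>

should return the SMART data that the smartd daemon collects for that disk.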

Duncan Epping posted the links to the ‘what’s new’ documents. If you want to read them yourself, click here.

Install the Dell Compellent VAAI plugin on your ESXi 4.1 hosts using the VMware vMA

Since somewhere around version 5.5.x of the Dell Compellent Storage Center software, two VAAI primitives, ‘Zero Block’ and ‘UNMAP’, are supported in ESX(i). In ESXi 5, VAAI is active out of the box and these two primitives will work without any action needed. In ESXi 4.1, you’ll have to install a plugin to get the ‘Zero Block’ VAAI primitive working. The ‘UNMAP’ primitive is available in ESXi 5 only. From Storage Center 6, the ‘Full Clone’ and ‘ATS’ primitives will also be supported for both ESXi versions.

Because I’m switching from thin provisioned to thick provisioned eager zeroed disks and will therefore be inflating a lot of VMDKs, the ‘Zero Block’ VAAI primitive will potentially save me a lot of waiting time. When creating new VMDKs there will also be significant time savings. The reason for switching to thick disks is that we have thin provisioning on the Dell Compellent SAN.

Using thick disks instead of thin disks will therefore cost no extra space but will, on the other hand, improve performance because ESXi will not issue a SCSI lock every time the disk has to grow. Datastores can’t be over-provisioned anymore, so there is also one less thing to monitor as a sysadmin.
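
For reference, inflating an existing thin disk to eager zeroed thick can be done with vmkfstools from the ESXi shell. A minimal sketch (the path is just a placeholder for your own VM, which should be powered off; check vmkfstools --help for the exact option name on your build):

vmkfstools --inflatedisk /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk

With the ‘Zero Block’ primitive off-loaded to the Compellent, this is exactly the kind of operation that should finish a lot faster.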

Let’s get things going. First download the VAAI plugin from the Dell Compellent site (you’ll need an account). I recommend you read the included PDF file; it contains useful install information. Now log into the vMA.

If you want to see the current VAAI status of your ESXi machine, execute the command:

esxcli --server <server FQDN> --username root vaai device list

You’ll probably see nothing returned. If you execute the command

esxcli vaai device list

on the ESXi server itself, you’ll probably get results similar to the screenshot below.

‘Unknown’ means ‘VAAI is supported on your storage array but not enabled in ESXi’.
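
As a side note (not required for the plugin install), VAAI itself is controlled by three host-wide advanced settings, which you can read from the vMA with vicfg-advcfg. A quick sanity check that nobody has switched the primitives off, assuming the default setting names (a value of 1 means enabled):

vicfg-advcfg --server <server FQDN> --username root -g /DataMover/HardwareAcceleratedMove
vicfg-advcfg --server <server FQDN> --username root -g /DataMover/HardwareAcceleratedInit
vicfg-advcfg --server <server FQDN> --username root -g /VMFS3/HardwareAcceleratedLocking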

Use the command

esxcli --server <server FQDN> --username root corestorage claimrule list --claimrule-class=VAAI

on the vMA to show all VAAI plugins. DELL_VAAIP_COMPELLENT should not be present yet. If it is, uninstall it first! See the PDF for instructions.

Next, upload the Dell Compellent VAAI plugin dell-vaaip-compellent-1-offline_bundle-518180.zip to the vMA. I use WinSCP.
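
If you’d rather stay on the command line, a plain scp from your workstation works just as well (assuming the default vi-admin account on the vMA; adjust names and paths to your environment):

scp dell-vaaip-compellent-1-offline_bundle-518180.zip vi-admin@<vMA FQDN>:~/

Now log into the vMA and issue this command to make sure the plugin is compatible and not yet installed: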

vihostupdate --server <server FQDN> --username root --scan --bundle ./dell-vaaip-compellent-1-offline_bundle-518180.zip

The result should be something like this

Install the bundle using the command

vihostupdate --server <server FQDN> --username root --install --bundle ./dell-vaaip-compellent-1-offline_bundle-518180.zip

Reboot the server using the command

vicfg-hostops --server <server FQDN> --username root --operation reboot
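
By the way: if the host is still running VMs, you probably want to move them off and put the host in maintenance mode before the reboot. vicfg-hostops can do that as well (the ‘enter’ operation; ‘exit’ takes it out of maintenance mode afterwards):

vicfg-hostops --server <server FQDN> --username root --operation enter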

After the reboot, execute the following commands to enable and load the VAAI plugin

esxcli --server <server FQDN> --username root corestorage claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=COMPELNT --autoassign
esxcli --server <server FQDN> --username root corestorage claimrule add --claimrule-class=VAAI --plugin=DELL_VAAIP_COMPELLENT --type=vendor --vendor=COMPELNT --autoassign
esxcli --server <server FQDN> --username root corestorage claimrule load --claimrule-class=Filter
esxcli --server <server FQDN> --username root corestorage claimrule load --claimrule-class=VAAI
esxcli --server <server FQDN> --username root corestorage claimrule run --claimrule-class=Filter

You shouldn’t receive any errors
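
If you want an extra confirmation at this point, you can list the Filter-class claim rules the same way we listed the VAAI class earlier; a VAAI_FILTER rule for vendor COMPELNT should now show up:

esxcli --server <server FQDN> --username root corestorage claimrule list --claimrule-class=Filter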

Now verify the plugin is working by using the command

esxcli --server <server FQDN> --username root vaai device list

again. If everything went as it should, you’ll end up with something like this

Use the command

esxcli --server <server FQDN> --username root corestorage claimrule list --claimrule-class=VAAI

to show all VAAI plugins. DELL_VAAIP_COMPELLENT should now be present.

To be sure, reboot your ESXi server and check again. The plugin should still be in use.

This concludes our exercise =)