Create a VM and do some configuration with PowerCLI

Like so many others, I’ve been trying to up my PowerCLI skills lately. I do this to make my life and that of my colleagues a bit easier and to get the skills needed to up my certification to the VCAP level 🙂

Here’s some code to create a 64-bit Oracle Enterprise Linux VM with virtual hardware version 9 and without a floppy drive. I’ll try to update this post as more settings come up that must or can be made.

Create the VM with 7 disks.

$name = "MyVMName"
New-VM -VMHost MyESXiHost -CD -Name $name -MemoryMB 16384 -NumCPU 1 -Version v9 -GuestId oracleLinux64Guest -Datastore VMFS3 -DiskGB 10,20,30,200,200,200,10 -DiskStorageFormat Thin

Some settings which are annoying to do with a mouse and also easily forgotten:

$vm = Get-VM $name
New-AdvancedSetting -Entity $vm -Name 'vcpu.hotadd' -Value 'true' -Confirm:$false
New-AdvancedSetting -Entity $vm -Name 'mem.hotadd' -Value 'true' -Confirm:$false
New-AdvancedSetting -Entity $vm -Name 'svga.autodetect' -Value 'true' -Confirm:$false

This is the code to do the same if the settings have already been configured:

$vm | Get-AdvancedSetting -Name 'vcpu.hotadd' | Set-AdvancedSetting -Value 'true' -Confirm:$false
$vm | Get-AdvancedSetting -Name 'mem.hotadd' | Set-AdvancedSetting -Value 'true' -Confirm:$false
$vm | Get-AdvancedSetting -Name 'svga.autodetect' | Set-AdvancedSetting -Value 'true' -Confirm:$false
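If you don’t know in advance whether the settings exist, the two approaches can be combined into one idempotent sketch: create the setting when it’s absent, update it otherwise. The error handling via -ErrorAction is my own assumption, so test it in your environment first.

```powershell
# Sketch: create each advanced setting if absent, otherwise update it
$settings = @{ 'vcpu.hotadd' = 'true'; 'mem.hotadd' = 'true'; 'svga.autodetect' = 'true' }
foreach ($entry in $settings.GetEnumerator()) {
    $existing = Get-AdvancedSetting -Entity $vm -Name $entry.Key -ErrorAction SilentlyContinue
    if ($existing) {
        $existing | Set-AdvancedSetting -Value $entry.Value -Confirm:$false
    } else {
        New-AdvancedSetting -Entity $vm -Name $entry.Key -Value $entry.Value -Confirm:$false
    }
}
```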

Mount the installation ISO and change the NIC and SCSI controller type

Get-VM $vm | Get-CDDrive | Set-CDDrive -ISOPath "[VMFS1] ISO\oel_6.4_boot.iso" -StartConnected $true -Confirm:$false
Get-VM $vm | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName Vlan126 -Confirm:$false
Get-VM $vm | Get-NetworkAdapter | Set-NetworkAdapter -Type VMXNet3 -Confirm:$false
Get-VM $vm | Get-ScsiController | Set-ScsiController -Type ParaVirtual

Add the last 4 disks to a separate SCSI controller

$disks = Get-Harddisk -VM $vm | Select -Last 4
New-ScsiController -Type ParaVirtual -Harddisk $disks

Start the VM

Start-VM $vm

That’s it for now. Comments are welcome.


Remediate entity error: Host cannot download files from VMware vSphere Update Manager patch store. Check the network connectivity and firewall setup, and check esxupdate logs for details.

While trying to remediate an ESXi server I got the following error:

Host cannot download files from VMware vSphere Update Manager patch store. Check the network connectivity and firewall setup, and check esxupdate logs for details.

A quick cat of /var/log/esxupdate.log revealed what was going on: the host was pointing at outdated DNS servers and could not resolve the Update Manager server.

After changing the config to hold the correct IP addresses of the DNS servers all went fine 🙂
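As an aside, the DNS settings of a host can be checked from PowerCLI as well before you start remediating. A quick sketch; the host name is an example and the property names are from memory, so verify them in your PowerCLI version:

```powershell
# Check which DNS servers the host is currently using
Get-VMHost 'esx01.lab.local' | Get-VMHostNetwork | Select-Object HostName, DnsAddress
```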

VMware Auto Deploy error SSLError: [Errno 336265225] _ssl.c:337: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib

After upgrading to vCenter 5.1 Update 1a I decided to take a look at the Auto Deploy component, which hadn’t been working since our upgrade to vCenter 5.0. Because we didn’t plan on using it, I just disabled the service at that time. After the improvements to Auto Deploy in 5.1, especially the Stateless Caching mode, this technique might be something we want to use in the future.

Enabling the service resulted in the same error as before. The service starts, but the vSphere Client plugin cannot connect and displays the error ‘The request failed because of a connection failure. (Unable to connect to the remote server)’


I verified through a netstat -a on the Auto Deploy server that there were no listeners on ports 6501 and 6502.


I browsed to VMware KB 2000988 to start my quest for a solution.

I opened the vmconfig-autodeploy.xml from %ProgramData%\VMware\VMware vSphere Auto Deploy\ and verified all paths mentioned were correct. I also checked the settings in HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vSphere Auto Deploy with regedit. The only thing I changed in the registry and the XML file was the username. I changed it from the Down-Level Logon Name [NETBIOS DOMAIN]\[USER] to the User Principal Name [USER]@[DNS DOMAIN].

A restart of the service didn’t change the situation.

Because I didn’t do any configuration, I tried to re-register the service and got an error thrown at me, yay!


I opened the log file location %ProgramData%\VMware\VMware vSphere Auto Deploy\logs and immediately saw that a lot of logging was being created. A good thing in itself and a clear indication something was wrong 🙂


Opening one of the log files revealed that same error:

SSLError: [Errno 336265225] _ssl.c:337: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib

and another one

Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]


So, it seems to be a certificate issue. A Google search revealed nothing to further help me.

I opened the location of the SSL certificates %ProgramData%\VMware\VMware vSphere Auto Deploy\ssl and there was the root of the problem.


Because we still use the default installer certificates, these were quickly replaced with copies from e.g. the %ProgramData%\VMware\VMware ESXi Dump Collector\ssl folder.

I tried to re-register the service again, but was thrown another error:

error: attempt to write a readonly database


Again, because there was no configuration done, I simply renamed the db file in the Data directory and ran the command again. A new db file was created automatically.


The command did throw another error or warning,

'openssl' is not recognized as an internal or external command, operable program or batch file.

but it did finish successfully.


I restarted the Auto Deploy service and was able to connect my vSphere Client. All good!


Remove host from cluster error “Cannot remove the host because it’s part of VDS dvSwitch”

I received an error while trying to remove a host from a cluster.

Cannot remove the host [hostname] because it’s part of VDS [dvSwitch name]




This error is correct 🙂 As you can probably tell by its name, this host is connected to a dvSwitch called dvSwitch-vMotion. Obviously it’s used for vMotion. To be able to remove the host from the cluster I had to disconnect it from the dvSwitch.

Press Ctrl+Shift+N to get to the networking part of your vCenter Server inventory. Select your dvSwitch



and select the Host tab



Now select the host you want to remove and right-click on it. Select the ‘Remove from vSphere Distributed Switch’ option



Read the following warning very closely. You could get yourself into trouble if you disconnect the wrong port groups.



If you have decided it’s safe to remove the host anyway, you will most probably receive the following error



vDS [dvSwitch name] port [port number] is still on host [hostname] connected to [hostname] nic=vmk1 type=hostVmknic

This error is also correct 😉 You have not disconnected the virtual adapter from the portgroup!

Go back to your host. Select the Configuration tab and go to the Networking config. Select vSphere Distributed Switch. Here you can see the virtual adapter that’s still connected to the portgroup.



Select Manage Virtual Adapters. You’ll see a list of the connected adapters. In my case it’s only one. Select the adapter you want to remove and click Remove. Again, think a second about the warning.



Now you’re able to disconnect the server from the dvSwitch. Press Ctrl+Shift+N again and remove the host from the vSphere Distributed Switch. The remove process will disconnect your dvUplinks for you and your physical adapters will be free for other use.



And now you’re able to remove your host from the cluster.
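For the record, the whole GUI exercise above can also be scripted. A rough PowerCLI sketch, assuming the VDS cmdlets from PowerCLI 5.1 or later; the host, switch and vmk names are examples from this post, so adjust them to your environment:

```powershell
$vmhost = Get-VMHost 'esx01.lab.local'
$vds = Get-VDSwitch 'dvSwitch-vMotion'

# Remove the vmkernel adapter still connected to the dvSwitch portgroup
# (think twice: this kills vMotion traffic on this host!)
Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
    Where-Object { $_.Name -eq 'vmk1' } |
    Remove-VMHostNetworkAdapter -Confirm:$false

# Now the host can be detached from the vSphere Distributed Switch
Remove-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost -Confirm:$false
```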



Congrats, you did it! Have a good one.

Set the Path Selection Policy for every device path of every host in your vSphere 5.0/5.1 cluster using PowerCLI

I created a new cluster with 4 hosts and forgot to set the default Path Selection Policy (PSP). To fix this I’ll use PowerCLI. In my case I want to use the RoundRobin PSP for all my device paths to disks. Change the PSP according to your preference.

First get all device paths that do not use the Round Robin PSP:

Get-Cluster [ClusterName] | Get-VMHost | Get-ScsiLun -LunType disk | Where {$_.MultipathPolicy -ne "RoundRobin"}

which gave me this output:

CanonicalN ConsoleDeviceName              LunType CapacityMB MultipathPolicy
---------- -----------------              ------- ---------- ---------------
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 Fixed
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 Fixed
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 Fixed
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 Fixed
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 Fixed

After this, just set the PSP for those device paths with the command:

Get-Cluster [ClusterName] | Get-VMHost | Get-ScsiLun -LunType disk | Where {$_.MultipathPolicy -ne "RoundRobin"} | Set-ScsiLun -MultipathPolicy "RoundRobin"

Which gave me what I wanted:

CanonicalN ConsoleDeviceName              LunType CapacityMB MultipathPolicy
---------- -----------------              ------- ---------- ---------------
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 RoundRobin
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 RoundRobin
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 RoundRobin
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 RoundRobin
naa.600…   /vmfs/devices/disks/naa.600…   disk        768000 RoundRobin

Now to prevent future misconfigured PSPs, I want to set the default PSP for my default Storage Array Type to Round Robin. I’ll use the following script to get the available SATPs of all my hosts (normally, they should be the same on all hosts):

Get-Cluster [ClusterName] | Get-VMHost | ForEach-Object {
    $esxcli = Get-EsxCli -VMHost $_
    $esxcli.storage.nmp.satp.list()
}

Output (of 1 host):

PowerCLI D:\get_all_satp_available.ps1

DefaultPSP Description Name
———- ———– —-
VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_MSA
VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_ALUA
VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_DEFAULT_AP
VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_SVC
VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EQL
VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_INV
VMW_PSP_FIXED Placeholder (plugin not loaded) VMW_SATP_EVA
VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_ALUA_CX
VMW_PSP_RR Placeholder (plugin not loaded) VMW_SATP_SYMM
VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_CX
VMW_PSP_MRU Placeholder (plugin not loaded) VMW_SATP_LSI
VMW_PSP_RR Supports non-specific active/active … VMW_SATP_DEFAULT_AA
VMW_PSP_FIXED Supports direct attached devices VMW_SATP_LOCAL

VMW_SATP_DEFAULT_AA is the SATP in use, so that’s the one I’ll set the default PSP on.

And a list of all available PSPs:

Get-Cluster [ClusterName] | Get-VMHost | ForEach-Object {
    $esxcli = Get-EsxCli -VMHost $_
    $esxcli.storage.nmp.psp.list()
}

Output (of 1 host):

PowerCLI D:\get_all_psp_available.ps1

Description Name
———– —-
Most Recently Used Path Selection VMW_PSP_MRU
Round Robin Path Selection VMW_PSP_RR
Fixed Path Selection VMW_PSP_FIXED

PowerCLI D:\>

In my case, I want to make VMW_PSP_RR the default PSP for SATP VMW_SATP_DEFAULT_AA. So I use the script to set it:

Get-Cluster [ClusterName] | Get-VMHost | ForEach-Object {
    $esxcli = Get-EsxCli -VMHost $_
    # Arguments: boot option (skipped with $null), default PSP, SATP
    $esxcli.storage.nmp.satp.set($null, "VMW_PSP_RR", "VMW_SATP_DEFAULT_AA")
}

And that gives me what I want. Because there are 4 hosts in my cluster, the output is repeated 4 times:

PowerCLI > D:\set_all_satp_to_psp.ps1
PowerCLI > D:\

Have a good one 🙂

List driver and firmware information of all physical QLogic HBAs of all your ESXi 4.1/5.0 hosts using PowerCLI 5.0.1

I wanted to know the version of the QLogic HBA driver and firmware of all ESXi hosts. I took the script I used earlier to list all NIC info and edited it. I ended up with this:

$ExportFilePath = "C:\HostHBAsInfo.csv"
$PuttyUser = "root"
$PuttyPwd = "password"
$Plink = "D:\plink.exe"
$PlinkOptions = " -v -pw $PuttyPwd"
$RCommand0 = "cat /proc/scsi/qla2xxx/"
$ESXHosts = Get-VMHost | Sort Name
$Report = @()
ForEach ($ESXHost in $ESXHosts) {
    $pHbas = $ESXHost | Get-VMHostHBA -Type FibreChannel
    $x = 3
    ForEach ($pHba in $pHbas) {
        $Message = ""
        $HostInfo = {} | Select HostName,ESXVersion,Cluster,pHba,FCFirmwareVersion,DriverVersion,BIOSVersion,FCODEFirmware,EFIFirmware,FlashFirmware,ISP
        $HostInfo.HostName = $ESXHost.Name
        $HostInfo.ESXVersion = $ESXHost.Version
        $HostInfo.Cluster = (Get-Cluster -VMHost $ESXHost.Name).Name
        $HostInfo.pHba = $pHba.Model
        Write-Host "Connecting to: " $ESXHost.Name -ForegroundColor Green
        $Command = $Plink + " " + $PlinkOptions + " " + $PuttyUser + "@" + $ESXHost.Name + " " + $RCommand0 + $x
        $Message = Invoke-Expression -Command $Command
        $HostInfo.FCFirmwareVersion = ($Message[1] -split "FC Firmware version ")[1]
        $HostInfo.DriverVersion = ($Message[1] -split "Driver version ")[1]
        $HostInfo.BIOSVersion = ($Message[5] -split "BIOS version ")[1]
        $HostInfo.FCODEFirmware = ($Message[6] -split "FCODE version ")[1]
        $HostInfo.EFIFirmware = ($Message[7] -split "EFI version ")[1]
        $HostInfo.FlashFirmware = ($Message[8] -split "Flash FW version ")[1]
        $HostInfo.ISP = ($Message[9] -split "ISP: ")[1]
        $Report += $HostInfo
        # Next HBA port maps to the next /proc/scsi/qla2xxx/ entry
        $x++
    }
}
$Report = $Report | Sort-Object HostName
If ($Report -ne "") {
    $Report | Export-Csv $ExportFilePath -NoTypeInformation
    Invoke-Item $ExportFilePath
}

Please note the value of $x is the indexed value of your vmhba device: in my example the two HBA ports are the 4th and 5th overall storage devices (/proc/scsi/qla2xxx/3 and /proc/scsi/qla2xxx/4), so $x starts at 3 and should be incremented for each HBA port.

To enable/disable the TSM-SSH service on each host, I used the following PowerCLI one-liners (thanks again Alan):

Get-VMHost | Sort Name | Get-VMHostService | Where { $_.Key -eq "TSM-SSH"} | Start-VMHostService

Get-VMHost | Sort Name | Get-VMHostService | Where { $_.Key -eq "TSM-SSH"} | Stop-VMHostService

Update/Change your ESXi hosts DNS IP address settings with PowerCLI

Long time since the last blog post. Reason: our third child was born. And it runs with higher priority than work-related stuff =)

Anyways, moving right along….

During a domain upgrade we introduced 2 new DCs in our AD domain that will take over the DNS server role from the old DCs. Hence all servers need to be updated with the new DNS server IP addresses, including the ESXi hosts.

This proved to be relatively easy. I updated all ESXi hosts (managed with vCenter Server) with the command:

Get-VMHost | Get-VMHostNetwork | Set-VMHostNetwork -DnsAddress [DNS1 IP address],[DNS2 IP address]

Test it first on one server by specifying it:

Get-VMHost -Name [FQDN of ESXi host] | Get-VMHostNetwork | Set-VMHostNetwork -DnsAddress [DNS1 IP address],[DNS2 IP address]

You can also change other parameters, like the Domain and SearchDomain

Get-VMHost | Get-VMHostNetwork | Set-VMHostNetwork -DnsAddress [DNS1 IP address],[DNS2 IP address] -Domain [Domain name] -SearchDomain [Search domain name]
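To verify the change took effect on all hosts afterwards, a quick read-back sketch; the exact property names are from memory and may differ per PowerCLI version:

```powershell
# List the resulting DNS configuration per host
Get-VMHost | Get-VMHostNetwork |
    Select-Object HostName, DnsAddress, DomainName, SearchDomain |
    Format-Table -AutoSize
```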

Easy! PowerCLI FTW!


VMware vMA resxtop error: Login failed, reason: Success

Working with resxtop in the vMA today and mistyping a servername created this funny error:

[root@vmware-vma vi-admin]# resxtop --server blabla.lan --username root
root@blabla.lan's password:
Login failed, reason: Success
[root@vmware-vma vi-admin]#


I just wanted to share this with you =)

VMware vSphere 5.1 is announced. Available from September 12th, 2012. What’s New?

NOTE: I’m still digging through the ‘what’s new’ documents, so I’ll probably be adding info to this post in the coming days.

This post sums up all the notable changes and new features in vSphere 5.1 for future reference.

VM:

  • 64 vCPU is now possible (up from 32 in 5.0)
  • VM version 9. Now supports GPU acceleration (Only usable in VMware View) on selected nVidia GPUs
  • New feature: Guest OS Storage Reclamation (Only usable in VMware View)
  • New feature: VHV (virtualized hardware virtualization) to run VMs inside of VMs
  • A shift from using virtual hardware versioning to ‘Compatibility levels’


  • Upgrading to VM version 9 doesn’t require the VM to be down
  • Shell activity is no longer logged as root, but as the logged on user
  • New feature: Support for SNMPv3
  • vMotion and svMotion can now be combined in one action
  • Windows 8/2012 are supported
  • Piledriver, Sandy Bridge E and Ivy Bridge CPUs are now supported


  • dvSwitch supports LACP
  • New feature: Network Health Check
  • New feature: Configuration Backup & Restore
  • New feature: Roll Back and Recovery
  • SR-IOV support


  • vMotion is now possible on non-shared storage
  • vMotion and svMotion can now be done simultaneously (“Unified” vMotion)
  • New feature: vSphere Data Protection. Agentless VM to disk backup with dedup
  • New feature: vSphere Replication. Replication on VM level over SAN or WAN
  • After installing VMware Tools 5.1 (and reboot), no more reboots are needed with future installs of VMware Tools


  • New feature: VMware vShield Endpoint is now included


  • sDRS and Storage Driven Profiles are now integrated in VMware vCloud Director (5.1?)
  • vSphere Auto Deploy has 2 new methods for deploying vSphere hosts: Stateless Caching and Stateful Installs
  • Up to 80 concurrent host boots are now supported by the Auto Deploy server


  • vSphere Web Client is greatly improved
  • New feature: vCenter Single Sign-On
  • vCenter Orchestrator has newly designed workflows and can now be launched by the vSphere Web Client


  • vSphere licensing is back to CPU sockets instead of a specific amount of vRAM per CPU socket. (Thanks Michael)


  • Max number of hosts that can share a read-only file is now 32 (up from 8 in vSphere 5.0)
  • New feature: Virtual disk type SE sparse disk. Used to enable wipe/UNMAP free disk space initiated from within the VM (VMware Tools). HW version 9 required. VMware View only.
  • Grain size of VMDK is now tunable, but not by users =) Default size is still 4KB. Redo logs still use 512B.
  • Improved APD and PDL handling: Misc.APDHandlingEnable, Misc.APDTimeout, disk.terminateVMOnPDLDefault, das.maskCleanShutdownEnabled
  • Extends detection to PDL for iSCSI arrays with single LUN per target
  • Booting from software FCoE adapter is now supported
  • 16Gb speed support for 16Gb FC HBA’s
  • vCloud Director 5.1 can now also make use of VAAI NAS primitives (using a plugin from the vendor) to off-load the creation of linked clones
  • New namespaces in esxcli (esxcli storage san) for troubleshooting
  • New feature: Smartd daemon. Collects SMART info from disks. Can only be used from esxcli.
  • SIOC will now calculate the latency threshold from the 90% throughput value (90% of peak value)
  • SIOC will now be enabled in stats-only mode by default
  • When involving vCloud Director linked clones, sDRS will now not recommend placing them on datastores that do not contain the base disk or a shadow vm copy of the base disk
  • The datastore correlation detector now uses the I/O injector, so no VASA support is needed anymore
  • New SIOC metric: VmObservedLatency
  • Support for 4 parallel disk copies per svMotion operation to distinct datastores
  • Jumbo frame support for all iSCSI adapters

Duncan Epping posted the links to the ‘what’s new’ documents. If you want to read them yourself, click here.

Install the Dell Compellent VAAI plugin on your ESXi 4.1 hosts using the VMware vMA

Somewhere since version 5.5.x of the Dell Compellent Storage Center software, 2 VAAI primitives, ‘Zero Block’ and ‘UNMAP’, are supported in ESX(i). In ESXi 5, VAAI will be active out of the box and these 2 primitives will be working without any action needed. In ESXi 4.1, you’ll have to install a plugin to get the ‘Zero Block’ VAAI primitive working. The ‘UNMAP’ primitive will be available in ESXi 5 only. From Storage Center 6, the ‘Full Clone’ and ‘ATS’ primitives will also be supported for both ESXi versions.

Because I’m switching from thin provisioned to thick provisioned eager zeroed disks and therefore will be inflating a lot of VMDKs, the ‘Zero Block’ VAAI primitive will potentially save me a lot of waiting time. When creating new VMDKs there will also be significant time savings. The reason for switching to thick disks is that we have thin provisioning on the Dell Compellent SAN.

Using thick disks instead of thin disks will therefore cost no extra space but, on the other hand, will improve performance because ESXi will not issue a SCSI lock when a disk has to be increased in size. Datastores can’t be over provisioned anymore, so there is also one less thing to monitor as a sysadmin.

Let’s get things going. First download the VAAI plugin from the Dell Compellent site (you’ll have to have an account). I recommend you read the included PDF file; it contains useful install information. Now log into the vMA.

If you want to see the current VAAI status of your ESXi machine, execute the command:

esxcli --server <server FQDN> --username root vaai device list

You’ll probably see nothing returned. If you execute the command

esxcli vaai device list

on the ESXi server, you’ll probably get results similar to the screenshot below

‘Unknown’ means ‘VAAI is supported on your storage array but not enabled in ESXi’.

Use the command

esxcli --server <server FQDN> --username root corestorage claimrule list --claimrule-class=VAAI

on the vMA to show all VAAI plugins. DELL_VAAIP_COMPELLENT should not be present. If it is, first uninstall it! See the PDF for instructions.

Next, upload the Dell Compellent VAAI plugin to the vMA. I use WinSCP. Log into the vMA and issue this command to make sure the plugin is compatible and not yet installed:

vihostupdate --server <server FQDN> --username root --scan --bundle ./

The result should be something like this

Install the bundle using the command

vihostupdate --server <server FQDN> --username root --install --bundle ./

Reboot the server using the command

vicfg-hostops --server <server FQDN> --username root --operation reboot

After the reboot, execute the following commands to enable and load the VAAI plugin

esxcli --server <server FQDN> --username root corestorage claimrule add --claimrule-class=Filter --plugin=VAAI_FILTER --type=vendor --vendor=COMPELNT --autoassign
esxcli --server <server FQDN> --username root corestorage claimrule add --claimrule-class=VAAI --plugin=DELL_VAAIP_COMPELLENT --type=vendor --vendor=COMPELNT --autoassign
esxcli --server <server FQDN> --username root corestorage claimrule load --claimrule-class=Filter
esxcli --server <server FQDN> --username root corestorage claimrule load --claimrule-class=VAAI
esxcli --server <server FQDN> --username root corestorage claimrule run --claimrule-class=Filter

You shouldn’t receive any errors

Now verify the plugin is working by using the command

esxcli --server <server FQDN> --username root vaai device list

again. If everything went as it should, you’ll end up with something like this

Use the command

esxcli --server <server FQDN> --username root corestorage claimrule list --claimrule-class=VAAI

to show all VAAI plugins. DELL_VAAIP_COMPELLENT should now be present.


To be sure, reboot your ESXi server and check again. The plugin should still be in use.

This concludes our exercise =)
