Connect your ESXi 4 host to an HP 4800G stack with a trunk/aggregation link

The blog post below is wrong. I wasn't fully up to date with the technologies. I would like to remove the post, but will keep it here to prevent others from making the same mistake.

I was enlightened by a very experienced network engineer. He explained to me how teaming/bonding/trunking works on the Layer 2 and Layer 3 level. On the L2 level it boils down to communication between 2 MAC addresses. You can easily trunk between 2 switches because each switch has 1 MAC address. It becomes more difficult when you want to trunk between a switch and multiple NICs (e.g. an ESX(i) server). Because every NIC has its own MAC address, the L2 type of trunking won't work anymore. This is where the virtual switch comes into the picture.
The physical NICs of the ESX(i) host (vmnics) are connected to the vSwitch. Then you set the vSwitch to do the 'trunking' (Load balancing policy: Route based on IP hash). With that in place, no special configuration is needed on the physical switch.
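To get a feel for what the IP Hash policy does, here is a tiny sketch. VMware documents the behavior as choosing an uplink from an XOR of the source and destination IP addresses, modulo the number of active uplinks; the exact hash internals are an implementation detail, so treat `pick_uplink` as an illustration only, not VMware's actual code:

```shell
# Illustrative only: picks an uplink index the way an IP-hash style policy
# would, by XOR-ing the last octets of the two IPs and taking the result
# modulo the number of active vmnics in the team.
pick_uplink() {
  src_octet=${1##*.}   # last octet of the source IP
  dst_octet=${2##*.}   # last octet of the destination IP
  nic_count=$3         # number of active uplinks (vmnics) in the team
  echo $(( (src_octet ^ dst_octet) % nic_count ))
}

# The same IP pair always maps to the same vmnic; different pairs spread
# across the team, which is why the switch side must be one logical trunk.
pick_uplink 10.0.100.15 10.0.100.20 2
```

Note that a single source/destination pair never uses more than one uplink; you only get load spreading across many conversations.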

For the same reasons you cannot connect a bond to 2 separate switches. A bond pairs 2 or more NICs on the OS level, while on the switch level you have to create a trunk/aggregation link, so this will never work across 2 separate switches. The obvious solution is to stack your switches (SMLT), making them 1 logical switch (with 1 MAC address).

I want to note that you can do LACP trunking with some blade enclosures. They have a built-in switch, which gives the enclosure one MAC address externally. On the inside of the blade enclosure, some type of bonding is done.


A few quick notes to successfully connect your ESXi 4 host to an HP 4800G switch or stack using a trunk/aggregation interface. The first thing to note is that I don't use the word LACP. That's because ESXi doesn't support it. Read up here and here. That said, I come to the following notes:

  1. Don’t use LACP. No link-aggregation mode dynamic.
  2. Don't use a hybrid port link-type. You can only use a static trunk type, so use link-type trunk.
  3. Set the pvid vlan to the management VLAN.
  4. Set the Management Network on the vSwitch to VLAN 4095 (All)
  5. Set the vSwitch Load Balancing policy to Route based on IP hash (you probably already knew this).
  6. Through an SSH connection (or on the console), use the command
    vim-cmd hostsvc/net/vswitch_setpolicy --nicteaming-policy=loadbalance_ip vSwitch0

    to set the policy again.

  7. Through an SSH connection (or on the console), use the command
    vim-cmd hostsvc/net/portgroup_set --nicteaming-policy=loadbalance_ip vSwitch0 "Management Network"

    to set the policy on the Management Network port group.

  8. First apply all other needed config to the switch ports, then apply the aggregation config. So add the command "port link-aggregation group <group number>" as the final command on each port.
For me, these 2 vim-cmd commands were needed to get things working. That was probably because I tried a lot of config settings while troubleshooting, which may have confused the vSwitch in some way. The commands come from this KB article.
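If you want to double-check the result from the host side, the vSwitch layout (uplinks and port groups) can be listed from ESXi's Tech Support Mode shell. This is shown as a console transcript; it only runs on the ESXi host itself:

```
# List all vSwitches with their uplinks (vmnics) and port groups:
esxcfg-vswitch -l
```

Confirm that both vmnics show up as active uplinks on vSwitch0 before trusting the aggregation.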
Here are the config examples:
interface Bridge-Aggregation1
 description <ESXi host>
 port link-type trunk
 port trunk permit vlan 10 20 30 40 100 300
 port trunk pvid vlan 100

interface GigabitEthernet1/0/1
 description <ESXi host>
 port link-type trunk
 port trunk permit vlan 10 20 30 40 100 300
 port trunk pvid vlan 100
 broadcast-suppression pps 3000
 undo jumboframe enable
 port link-aggregation group 1
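Once the ports have joined the group, the aggregation state can be checked on the 4800G itself. These are standard Comware display commands, but exact syntax can differ between software releases, so treat this as a sketch and verify against your command reference:

```
display link-aggregation summary
display link-aggregation verbose Bridge-Aggregation 1
```

Both member ports should show as selected; an unselected port usually points at a mismatch in the per-port config applied before the aggregation command.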

Good luck!


About Yuri de Jager
Technology Addict
