vMotion and EtherChannel, an overview of the load-balancing policies stack

Frank Denneman is a Chief Technologist at VMware, primarily focusing on Machine Learning technology. He is an author of the vSphere host and clustering deep dive series, podcast host for the Unexplored Territory podcast, and you can follow him on Twitter.

After posting the article “Choose link aggregation over Multi-NIC vMotion” I received a couple of similar questions. Pierre-Louis left a comment that covers most of them, a question relating to Lee’s post. Let me use it as an example and clarify how vMotion traffic flows through the stack of multiple load-balancing algorithms and policies:

“Is there any sense to you in using two uplinks bundled in an aggregate (LAG) with Multi-NIC vMotion to give, on one hand, more throughput to vMotion traffic and, on the other hand, dynamic protocol-driven mechanisms (either forced or LACP with stuff like Nexus1Kv or DVS 5.1)? Most of the time, when I’m working in a VMware environment, there is an EtherChannel (when vSphere < v5.1) with access datacenter switches that dynamically load balance traffic based on IP hash. If I’m using a LAG, the main point to me is that load balancing is done independently from VMware’s embedded mechanism (Active/Standby, for instance). Do you think there is any issue with using a LAG instead of an Active/Standby design with Multi-NIC vMotion? Do you feel there is no benefit to using a LAG over Active/Standby (from a VMware point of view and from a hardware network point of view)?”

Pierre-Louis takes a bottom-up approach when reviewing the stack of virtual and physical load-balancing policies, and although he is correct in stating that network load balancing is done independently from VMware’s network stack, it does not have the impact he thinks it has. Let’s look at the starting point of vMotion traffic and how that impacts both the flow of packets and the utilization of links.

Please read the articles “Choose link aggregation over Multi-NIC vMotion” and “Designing your vMotion network” to review some of the requirements of Multi-NIC vMotion configurations.

Let’s assume you have two uplinks in your host. Each vmnic used by the VMkernel NIC (vmknic) is configured as active, and both links are aggregated in a Link Aggregation Group (LAG), EtherChannel in Cisco terms. First, I want to clarify that the active/standby state of a vmknic is static and is controlled by the user, not by a load-balancing policy. When using a LAG, both vmnics need to be configured as active, because the load-balancing policy needs to be able to send traffic across both links. Duncan explains the impact of using standby NICs in an IP-hash configuration.

A vMotion is initiated at the host level, therefore the first load balancer that comes into play is vMotion itself. Then the portgroup load-balancing policy makes a decision, followed by the physical switch. Load balancing done by the physical switch/LAG is the last element in this stack.

Step 1: vMotion load balancing. This is done at the application layer, and it is the vMotion process that selects which VMkernel NIC is used. As you are using a LAG and two NICs, only one vMotion VMkernel NIC should exist. The previously mentioned article explains why you should designate all vmnics as active in a LAG. By using one vmknic enabled for vMotion, vMotion is unable to load-balance at the vmknic level and sends all the traffic to that single vmknic.
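The last element in the stack, the physical switch's IP-hash policy, is commonly described as XOR-ing the source and destination IP addresses and taking the result modulo the number of active uplinks. The sketch below (illustrative helper names, not VMware code) shows why a single vMotion stream, which uses one fixed source/destination vmknic IP pair, always lands on the same link no matter how many links are in the LAG, while flows to different destinations may spread across links:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index for a src/dst IP pair, IP-hash style.

    Simplified model: XOR the 32-bit source and destination addresses
    and take the result modulo the number of active uplinks.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# One vMotion stream = one fixed src/dst pair = always the same uplink:
print(ip_hash_uplink("10.0.0.11", "10.0.0.21", 2))  # 0
print(ip_hash_uplink("10.0.0.11", "10.0.0.21", 2))  # 0, every time

# A different destination host may hash to the other uplink:
print(ip_hash_uplink("10.0.0.11", "10.0.0.22", 2))  # 1
```

Because the hash input never changes for a given pair of vmknic IPs, bundling two links in a LAG adds no throughput to a single vMotion operation; it only spreads distinct IP pairs across the links.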