
The vSphere 5 best practices performance guide covers a few topics in relation to tuning host network performance, some of which I'll briefly cover in this post, aimed at covering the VCAP-DCA objective of the same name.

Network I/O Control (NIOC)

I've written this previous post on NIOC, so won't go into too much detail here. NIOC allows the creation of network resource pools so that you can better manage your host's network bandwidth. There are a number of pre-defined resource pools, and you can also create user-defined resource pools. Bandwidth can be allocated to resource pools using shares and limits. Once you have defined your resource pools you can then assign them to port groups.

DirectPath I/O

Like NIOC, I have written about DirectPath I/O previously here, so will keep this section brief. DirectPath I/O allows guest operating systems direct access to hardware devices. For example, in relation to networking, DirectPath I/O can allow a virtual machine to directly access a physical NIC. This reduces the CPU cost normally associated with emulated or para-virtualised network devices.
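
Passthrough devices themselves are enabled through the vSphere Client (on the host's Configuration tab, under Hardware > Advanced Settings). If you want to identify the physical NIC's PCI device details beforehand, one way, run from the ESXi shell, is to list the host's PCI devices:

    # List the host's PCI devices to find the NIC you want to pass through
    esxcli hardware pci list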

SplitRx Mode

SplitRx mode allows a host to use multiple physical CPUs to process network packets received in a single network queue. This feature can improve network performance for certain types of workloads, such as when multiple virtual machines on the same host are receiving multicast traffic from the same source. SplitRx mode can only be configured on VMXNET3 virtual network adapters, and is disabled by default. It can be enabled on a per-NIC basis using the ethernetX.emuRxMode variable in the virtual machine's .vmx file, where X is the ID of the virtual network adapter. Setting ethernetX.emuRxMode = "0" will disable SplitRx on an adapter, whilst setting ethernetX.emuRxMode = "1" will enable it.

To change this setting using the vSphere client, select the virtual machine then click Edit Settings. On the Options tab, click Configuration Parameters (found under the General section). If the ethernetX.emuRxMode variable isn't there then you can add it as a new row. The change will not take effect until the virtual machine has been restarted.
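
As a quick sketch, enabling SplitRx for a VM's first virtual NIC means adding an entry like the following to the .vmx file (ethernet0 is just an example; use the number of the adapter you actually want to change):

    ethernet0.emuRxMode = "1"

Setting the value back to "0" disables SplitRx for that adapter again.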

Running Network Latency Sensitive Applications

There are a number of recommendations when running virtualised workloads that are highly sensitive to network latency:

- Use VMXNET3 virtual network adapters whenever possible.
- Set the host's power policy to Maximum performance.
- Disable VMXNET3 virtual interrupt coalescing for the desired NIC. This is done by changing a configuration parameter on the virtual machine: the variable that needs to be changed is ethernetX.coalescingScheme (where X is the number of the desired NIC). If the variable doesn't exist, you can add it in, as shown in the sketch after this list.
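
For example, disabling virtual interrupt coalescing on a VM's first virtual NIC would mean adding an entry along these lines via Configuration Parameters, or directly in the .vmx file (ethernet0 is an example adapter number, and "disabled" is the value that, as I understand it, turns coalescing off):

    ethernet0.coalescingScheme = "disabled"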

Jumbo Frames

By default, an ethernet MTU (maximum transmission unit) is 1,500 bytes. A jumbo frame is a layer 2 ethernet frame that has a payload greater than 1,500 bytes. Using a larger MTU can lessen the CPU load on hosts. To enable jumbo frames you need to increase the MTU on all devices that make up the network path, from the source of the traffic to its destination.
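
On the ESXi host itself, a minimal sketch of enabling jumbo frames from the command line might look like this, assuming a standard vSwitch named vSwitch1 and a VMkernel interface vmk1 (both names are examples; adjust them to your environment):

    # Set an MTU of 9000 on a standard vSwitch
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

    # Set a matching MTU on the VMkernel interface carrying the traffic
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

Remember that the guest OS, physical switches and any other devices in the path need a matching MTU as well.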
