Tuning Configurations
- Use Network I/O Control to apply Limits, Shares and QoS priority tags to traffic
- Team NICs across PCI cards and switches for complete redundancy
- Using a vSphere Distributed Switch (vDS) gives you more features than the Standard Switch and minimises configuration time
- Utilise NIC Teaming where possible to provide failover and extra bandwidth
- Use Jumbo Frames where you can – MTU 9000 rather than 1500. The MTU must be set the same end to end (virtual switches, VMkernel ports and physical switches); see the MTU sketch after this list
- Keep physical NIC firmware updated
- Use VMXNET3 virtual network adapters where possible (the guest O/S must support them). VMXNET3 shares a ring buffer between the VM and the VMkernel and uses zero copy, which saves CPU cycles, and it takes advantage of transmit packet coalescing to reduce address-space switching; see the VMXNET3 sketch after this list
- DirectPath I/O, which lets a VM access a physical NIC directly rather than through an emulated or paravirtual device, may give you a bump in network performance, but you really need to look at the use case. You lose a lot of core functionality when using it, such as vMotion (with some special exceptions on Cisco UCS), FT, HA, DRS, Hot Add/Hot Remove and snapshots, so weigh the cost against the benefit and decide whether the trade-offs are worth it
- Enable the discovery protocols CDP and LLDP for extra information about your networks (LLDP requires a vDS)
- Make sure the NIC teaming policies on your virtual switches match the configuration on the physical switches – for example, IP-hash load balancing requires a static EtherChannel on the physical side; see the teaming sketch after this list
- Make sure your physical switches support cross-stack EtherChannel if you plan to use it in a fully redundant networking solution
- Use static or ephemeral port binding, as dynamic binding is deprecated
- Choose 10GbE over 1GbE. This gives you NetQueue, a feature that uses multiple transmit and receive queues to spread I/O processing across multiple CPUs
- Choose physical NICs with TCP checksum offload, which reduces the load on the physical CPU by letting the NIC perform checksum calculations on network packets
- Choose physical adapters with TCP Segmentation Offload (TSO), as this can reduce the CPU overhead involved in sending large amounts of TCP traffic
- To speed up packet handling, network adapters can be configured for direct memory access to high memory. This bypasses the CPU and gives the NIC direct access to memory
- Consider SplitRx mode on VMXNET3 adapters, an ESXi feature that uses multiple physical CPUs to process network packets received in a single network queue. It is configured individually on each vNIC and suits heavy receive workloads such as those at stock exchanges and multimedia companies; see the SplitRx sketch after this list
- Use VMCI if you have two VMs on the same host that need a high-speed communication channel; it bypasses the guest and VMkernel networking stacks
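The sketches below are illustrative additions rather than part of the original checklist; they use the pyVmomi Python SDK, and every address, credential and object name in them is a placeholder assumption. First, a minimal sketch for the jumbo-frame item: it raises the MTU to 9000 on a Standard Switch and on a VMkernel port. The physical switch ports must be configured for jumbo frames separately.

```python
import ssl
from pyVim.connect import SmartConnect

# Connect to vCenter and locate one ESXi host (address, credentials and the
# host name are placeholders for this sketch).
ssl_ctx = ssl._create_unverified_context()   # lab only: skips certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl_ctx)
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
net_sys = host.configManager.networkSystem

# vSwitch MTU: fetch the current spec, change only the MTU, push it back.
vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
vs_spec = vswitch.spec
vs_spec.mtu = 9000
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=vs_spec)

# VMkernel port MTU: same pattern with the virtual NIC spec ("vmk1" assumed).
vmk = next(n for n in net_sys.networkInfo.vnic if n.device == "vmk1")
vmk_spec = vmk.spec
vmk_spec.mtu = 9000
net_sys.UpdateVirtualNic(device="vmk1", nic=vmk_spec)
```

The later sketches reuse the `si`, `content` and `host` objects created here.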
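For the VMXNET3 item, a hedged sketch that hot-adds a VMXNET3 adapter to an existing VM, reusing the `content` object from the sketch above; the VM name "app01" and the port group "VM Network" are assumptions.

```python
from pyVmomi import vim

def add_vmxnet3_nic(content, vm_name="app01", portgroup="VM Network"):
    """Hot-add a VMXNET3 vNIC backed by a Standard Switch port group."""
    # Look the VM up by inventory name via a container view.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.DestroyView()

    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName=portgroup)
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, allowGuestControl=True)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```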
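For the teaming-policy items, a sketch that sets an explicit load-balancing policy on a Standard Switch so that it matches the physical switch configuration. IP hash is shown, which assumes a static EtherChannel on the upstream switch; the `host` object comes from the first sketch and the uplink names are assumptions.

```python
from pyVmomi import vim

def set_ip_hash_teaming(host, vswitch_name="vSwitch0", uplinks=("vmnic0", "vmnic1")):
    """Set route-based-on-IP-hash teaming with all uplinks active."""
    net_sys = host.configManager.networkSystem
    vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == vswitch_name)

    # Reuse the existing spec and change only the teaming policy. Policy options:
    # loadbalance_srcid, loadbalance_ip, loadbalance_srcmac, failover_explicit;
    # loadbalance_ip requires a static EtherChannel on the physical side.
    spec = vswitch.spec
    spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy="loadbalance_ip",
        notifySwitches=True,
        rollingOrder=False,
        failureCriteria=vim.host.NetworkPolicy.NicFailureCriteria(checkBeacon=False),
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=list(uplinks)))
    net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)
```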
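For the SplitRx item, a sketch that enables SplitRx mode on one VMXNET3 vNIC by writing the advanced VMX option ethernetX.emuRxMode through the vSphere API; the vNIC index is an assumption, and the `vm` object is assumed to have been located as in the VMXNET3 sketch.

```python
from pyVmomi import vim

def enable_splitrx(vm, ethernet_index=0):
    """Set ethernet<N>.emuRxMode = 1 so received packets on this vNIC can be
    processed by multiple physical CPUs (the change may need the vNIC
    reconnected or the VM power-cycled before it takes effect)."""
    opt = vim.option.OptionValue(key="ethernet%d.emuRxMode" % ethernet_index,
                                 value="1")
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(extraConfig=[opt]))
```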
In a native environment, CPU utilization plays a significant role in network throughput: processing higher levels of throughput takes more CPU resources. The effect of CPU resource availability on the network throughput of virtualized applications is even more significant. Because insufficient CPU resources will limit maximum throughput, it is important to monitor the CPU utilization of high-throughput workloads; a small monitoring sketch follows below.
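As a small illustration of the monitoring point above (again assuming the `host` object from the first sketch), this sketch reads vCenter quickStats to compare a host's CPU usage against its capacity and list per-VM usage, so CPU-bound high-throughput workloads stand out.

```python
def report_cpu_usage(host):
    """Print approximate CPU utilisation for an ESXi host and its running VMs."""
    hw = host.summary.hardware
    capacity_mhz = hw.cpuMhz * hw.numCpuCores
    used_mhz = host.summary.quickStats.overallCpuUsage
    print("%s: %d / %d MHz (%.1f%%)"
          % (host.name, used_mhz, capacity_mhz, 100.0 * used_mhz / capacity_mhz))
    for vm in host.vm:
        if vm.runtime.powerState == "poweredOn":
            print("  %s: %d MHz" % (vm.name, vm.summary.quickStats.overallCpuUsage))
```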