Link Aggregation w/ LACP
Link aggregation is not a new concept, yet I still see a lot of folks who don’t make regular use of it. In server networking, especially in heavily virtualized or highly available environments, it’s a crucial tool: it provides redundancy and, across multiple traffic flows, increased aggregate throughput. For how simple it is to implement, it’s a no-brainer to consider for physical server networking.
Red Hat covers its configuration nicely in this document for RHEL 6 and its flavours. They even provide a configuration helper app that guides you through the configuration options and produces a working configuration for you at the end. You can either copy in the interface files they provide or run a script that implements the changes you’ve selected.
To gain the most benefit from LACP, you’ll want your switch (or better yet, switches) to support it, but you can still benefit from enabling it in a single-switch environment. That’s the configuration I’m going to lay out here. I’ve got two onboard NICs in my physical CentOS 6.5 server, and I’m going to bond them together using the Linux bonding driver in mode 4 (802.3ad), which implements LACP. Here are my interface files:
server:/etc/sysconfig/network-scripts$ cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
SLAVE=yes
MASTER=bond0

server:/etc/sysconfig/network-scripts$ cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
SLAVE=yes
MASTER=bond0

server:/etc/sysconfig/network-scripts$ cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
IPADDR=10.0.8.4
NETMASK=255.255.255.0
GATEWAY=10.0.8.1
DNS1=10.0.8.1
BONDING_OPTS="mode=4 miimon=100"
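For reference, mode=4 is the bonding driver’s 802.3ad (LACP) mode, and miimon=100 tells it to check link state every 100 ms. If you want to influence how traffic gets spread across the slaves, the driver accepts a couple of additional options in BONDING_OPTS. A minimal sketch, purely optional – the lacp_rate and xmit_hash_policy values below are just one possible choice, not something the basic setup requires:

# Optional tuning in ifcfg-bond0 -- illustrative values, not required
#   lacp_rate=1               request LACPDUs every 1 s ("fast") instead of every 30 s
#   xmit_hash_policy=layer3+4 hash on IP and port so different flows can land on different slaves
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"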
Now, after a service network restart, LACP will be enabled and these two physical NICs will be bonded into one link aggregate, or channel bond. Should one NIC or its cable fail, the other will automatically take over. You can use the iperf tool to compare throughput between the bonded and unbonded configurations.
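To confirm the bond actually negotiated 802.3ad after the restart, and to run a rough throughput comparison, something like the following works (10.0.8.5 is just a placeholder for a second host on the same network running iperf in server mode):

# bring the new bond up
service network restart

# verify LACP negotiated and both slaves are active -- look for
# "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and
# "MII Status: up" under each slave interface
cat /proc/net/bonding/bond0

# rough throughput test against another host running "iperf -s";
# use several parallel streams, since a single TCP flow will still
# hash onto only one slave link
iperf -c 10.0.8.5 -P 4 -t 30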