Network teaming in RHEL7
If you’ve adopted or are just starting to read up on the new features in Red Hat Enterprise Linux 7, you may have come across the new networking feature called teaming. It is essentially a replacement for bonding that offers more modularity, better link monitoring, higher network performance, and easier management of interfaces.
Network interface configuration is something I typically set up once and forget about. I don’t usually revisit my networking configuration unless some hardware changes, or I need to re-IP a system. With the advancement of SDN, this might change in the not-so-distant future, so I thought I’d give teaming a try. And hey, if it offers even marginally greater performance, why not get the most out of my OS?
I first started by reading a comparison of Network Bonding to Network Teaming, and about the new network teaming daemon “teamd”, with its concept of “runners”. Network teaming basically assigns a daemon to each link aggregate, which lets you manage and monitor that interface through the daemon. I guess this is similar to systemd modularizing init scripts: we’ve got a daemon wrapper that will manage the config files for us in a programmatic way. After going through the docs, I found it pretty easy to get this set up and running. If you’re lazy (read: efficient) like me, there are adequate example configs in /usr/share/doc/teamd*/example-ifcfgs/ that are easily modified (there’s one for LACP).
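If you want to start from those shipped examples, a quick look at the doc directory is enough. The teamd version in the path will vary on your system, hence the glob; copy whichever ifcfg files match your runner into /etc/sysconfig/network-scripts/ and edit from there:

$ ls /usr/share/doc/teamd*/example-ifcfgs/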
On one of my hypervisors, I’ve got 2 NICs, an LACP bond (which will be replaced by an LACP team), and a bridge device. My new interface configs look like this:
$ cat ifcfg-eno1
DEVICE="eno1"
HWADDR="70:71:bc:5c:bd:b9"
DEVICETYPE="TeamPort"
ONBOOT="no"
TEAM_MASTER="team0"
NM_CONTROLLED=no

$ cat ifcfg-enp4s0
DEVICE="enp4s0"
HWADDR="00:17:3f:d1:31:d8"
DEVICETYPE="TeamPort"
ONBOOT="no"
TEAM_MASTER="team0"
NM_CONTROLLED=no

$ cat ifcfg-team0
DEVICE="team0"
DEVICETYPE="Team"
ONBOOT="yes"
BRIDGE=br0
BOOTPROTO=none
TEAM_CONFIG='{"runner": {"name": "lacp", "active": true, "fast_rate": true, "tx_hash": ["eth", "ipv4", "ipv6"]},"link_watch": {"name": "ethtool"},"ports": {"eno1": {}, "enp4s0": {}}}'

$ cat ifcfg-br0
IPV6INIT=yes
IPV6_AUTOCONF=yes
BOOTPROTO=static
NM_CONTROLLED=no
IPADDR=192.168.122.10
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=8.8.8.8
DNS2=8.8.4.4
DEVICE=br0
STP=yes
DELAY=7
BRIDGING_OPTS=priority=32768
ONBOOT=yes
TYPE=Bridge
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br0
The only real differences between bonding and teaming, as far as the config files are concerned, are the new device types “TeamPort” and “Team”, and the JSON-formatted TEAM_CONFIG entry inside the team config that replaces the traditional bonding options. In there you define the runner (the method of link management) you want for your slave ports, the link monitoring tool, and which interfaces are slaves (ports) of the team. You can use the same bridge options with the team as you did with the bond; that doesn’t change. One important rule to note, though: bringing up the team interface won’t also bring up the slave interfaces, and in order to add slaves to a team, the slaves need to be in a link-down state prior to being added. This is why ONBOOT=no is set for the NICs; when systemd brings up the network, it will add the downed slaves to the team, then link up the slaves, then link up the team. I guess this is the modularity intent: let the slaves be managed on their own, then manage the team based on the slave behaviour.
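To make that TEAM_CONFIG line a bit easier to read, here is the same JSON expanded; the content is identical to what’s in ifcfg-team0 above, just reflowed:

{
  "runner": {
    "name": "lacp",
    "active": true,
    "fast_rate": true,
    "tx_hash": ["eth", "ipv4", "ipv6"]
  },
  "link_watch": {
    "name": "ethtool"
  },
  "ports": {
    "eno1": {},
    "enp4s0": {}
  }
}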
After modifying my files as above, restarting the network service, and waiting a few seconds for LACP negotiation to occur, my team is up and running. You can use the new teamdctl command to query and control the interfaces:
$ sudo teamdctl team0 state
setup:
  runner: lacp
ports:
  eno1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 2, Selected
      selected: yes
      state: current
  enp4s0
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
    runner:
      aggregator ID: 2, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: yes
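A couple of other commands are handy for poking at a running team. This is just a rough sketch; check the teamdctl(8) and teamnl(8) man pages for the exact subcommands shipped with your version:

$ sudo teamdctl team0 config dump    # show the JSON config actually applied to the team
$ sudo teamnl team0 ports            # list the team's ports as the kernel sees them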
Nice! That was less daunting than I thought it might be. The team driver also has good integration with NetworkManager, should you choose to manage your team interfaces with the GUI.
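If you do go the NetworkManager route rather than plain network scripts, the team can also be created from the command line. The following is only a sketch of the nmcli approach (the connection names are illustrative, and the exact option syntax may differ between NetworkManager versions):

$ nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "lacp"}}'
$ nmcli con add type team-slave con-name team0-port1 ifname eno1 master team0
$ nmcli con add type team-slave con-name team0-port2 ifname enp4s0 master team0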
Others have done comparative benchmarking of bonding vs. teaming. Although the gains are probably quite marginal for my use case, the benchmarking shows additional bandwidth throughput with reduced CPU load.