Relevant mostly to OS X admins
I happened to have a 15″ Retina MacBook Pro, 2 Thunderbolt<->Ethernet adapters, a Synology DS1812+ NAS, a stack of 3 Dell 5548P switches, a few extra minutes, and some curiosity at my disposal today, so I decided to see what sort of real-world numbers I could push between the laptop and the NAS over 2 bonded Ethernet connections.
First up: the MBP. To create a Virtual Bonded Interface, start by plugging in the Thunderbolt-Ethernet adapters- the OS needs to see the interfaces present before they can be bonded. Then, from the Network pane of System Preferences, click the [+] at the bottom, and select Manage Virtual Interfaces:
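If you'd rather skip the GUI, the same bond can be created from Terminal with `ifconfig`- a sketch; `bond0` is the conventional name for the first bond, and the member names `en4`/`en5` here are assumptions (check yours with `ifconfig -l` or `networksetup -listallhardwareports`):

```shell
# Create the virtual bond interface (requires root)
sudo ifconfig bond0 create

# Add the two Thunderbolt-Ethernet adapters as members
# (en4/en5 are assumed names- verify with `ifconfig -l`)
sudo ifconfig bond0 bonddev en4
sudo ifconfig bond0 bonddev en5

# Confirm the bond and the status of its members
ifconfig bond0
```

The member links won't show as active until the switch side of the LAG is configured, which comes next.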
In the new window, click [+] again, and select New Link Aggregate…
Select the Ethernet interfaces to bond, and give them a name.
Before these are connected to the network, you need to configure a pair of switch ports to participate in the bond as well. On my Dells, that's configured via Switching: Link Aggregation: LAG Membership, where LAG stands for Link Aggregation Group. Enter Edit mode (vs. the Summary mode the page starts in), and indicate which ports belong to the currently selected LAG: click first in the LAG row to bring them into the group, then in the LACP row above it to enable 802.3ad negotiation. After saving the changes, you should be able to connect both Ethernet cables to the specified switch ports and bring up the bonded connection.
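For reference, the equivalent from the switch's CLI would look roughly like this- a sketch assuming a PowerConnect 55xx-style command set, with LAG number 1 and ports 1/0/1-2 standing in for whatever you actually picked:

```
console# configure
console(config)# interface range gigabitethernet 1/0/1-2
console(config-if-range)# channel-group 1 mode auto
console(config-if-range)# exit
console(config)# exit
```

`mode auto` negotiates the group via LACP (what the Mac's Link Aggregate expects); `mode on` would instead create a static LAG with no negotiation.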
Finally, we configure the NAS to do 802.3ad. On the Synology, that's configured via Control Panel: Network: Network Interface: Create. Choose the default 802.3ad link aggregation, both interfaces, and any VLAN options, then apply. A couple of pings will drop while the new bonded connection is set up. Attach both Ethernet cables to the LAG2-configured ports, and it should be back on your network, with twice the data path.
To measure the value of LACP, I tested some Finder file copies to and from the NAS. I picked a 12 Gig folder of .DMG files- many large files, so as to keep file-creation overhead minimal. Naturally, it helps to have fast disks at both ends- the MBP's SSD can sustain far more than this, and it looks like the NAS configuration of 8 drives with dual parity can also saturate 2 GigE links.
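As a sanity check on those numbers, here's a quick back-of-envelope- a sketch assuming raw line rate, ignoring TCP/SMB protocol overhead:

```shell
# One GigE link carries at most 1 Gbit/s ≈ 119 MiB/s of raw line rate;
# real-world file-sharing throughput lands a bit below that.
single_link=$(( 1000000000 / 8 / 1048576 ))
echo "one link: ~${single_link} MiB/s; two bonded: ~$(( 2 * single_link )) MiB/s"

# An observed ~180 MiB/s sits between one link and the 2x ceiling,
# and the 12 GiB test folder at that rate takes about:
echo "12 GiB at 180 MiB/s: ~$(( 12 * 1024 / 180 )) s"
```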
The writes aren’t quite as steady as I’d hoped for, but holding in the 180+ MB/sec range for network writes from a laptop is pretty nifty. There’s also the benefit of redundancy- one side of the bond can be lost (cable out, or switch down, presuming the LAG spans multiple switches), and the network connection isn’t broken. And if multiple, non-LAG clients are addressing the NAS, it does a great job of providing high performance to each of them.
The price you pay in this configuration is flexibility- you can’t just plug one of these 2 adapters into an arbitrary network port like you can out of the box; those 2 Tbolt<->GigE adapters are now part of that bond, not separate devices to be used as single links. You’ll need a 3rd adapter if you want both the high-performance bond AND a standard, not-bound-to-the-LACP-configured-ports Ethernet connection- perhaps an “at my desk” vs. “anywhere else” use case.
I’ve since realized my testing method above is flawed. I only used the data from Activity Monitor to answer the question “how fast are we transferring data?”- and its answer, above, was 194 MB/sec.
That value is not a measure of true throughput. I revisited this topic today, timing how long a specific file transfer took with 2 bonded interfaces and an LACP configuration on my switch, vs. how long it took using a single port with no bonding. Answer: exactly the same. Despite Activity Monitor (now on 10.11.3) still reporting 2x the transfer rate under the bond, the actual time to transfer a 1.5 GB .dmg was unchanged by bonding ports. Sorry to say: despite the visual feedback in Activity Monitor, port bonding cannot double your transfer rate from a single source.
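The reason is how 802.3ad distributes traffic: the switch picks one physical link per flow by hashing packet headers, so every frame of a single transfer rides the same 1 GbE link. A toy sketch of the idea- the XOR hash and the MAC low bytes below are illustrative stand-ins, not any specific switch's real algorithm:

```shell
# One flow = one hash = one link, no matter how many links are bonded.
laptop_mac_low=0x55   # low byte of the MBP's MAC (assumed for illustration)
nas_mac_low=0xAA      # low byte of the NAS's MAC (assumed for illustration)
num_links=2

# Every frame of the laptop<->NAS flow produces the same hash,
# so the whole transfer is pinned to a single 1 GbE link:
link=$(( (laptop_mac_low ^ nas_mac_low) % num_links ))
echo "laptop->NAS flow always uses link ${link}"

# A second client with a different MAC can hash to the other link-
# which is why multiple clients *can* fill both links at once:
client2_mac_low=0x54
link2=$(( (client2_mac_low ^ nas_mac_low) % num_links ))
echo "client2->NAS flow uses link ${link2}"
```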
So what is it good for? Maximizing performance from a server to multiple clients. In this case, if you’re using a MBP as a server… well, more power to ya.