NIC teaming is an advanced server configuration in Windows Server that combines multiple network interface controllers (NICs) into a single logical network connection. It provides load balancing and redundancy, and this article explains the basics of configuring NIC teaming in Windows Server 2019.
Would you prefer your servers to stay online at all times, even if a core switch fails? Do you want your server to be able to access various VLANs without relying on a single vendor’s drivers? Do you want to make networking easier to configure for the physical and virtual hosts in your environment? If you answered yes to any of these questions, this post is for you. Let NIC Teaming handle it.
NIC teaming can make your servers more fault tolerant, make better use of network resources (load balancing), provide VLAN access for the machine, and simplify networking settings.
In this post, you’ll discover what NIC teaming is, how to use it in virtual machines, and how to deploy it in your organization.
Prerequisites
To understand NIC teaming, you don’t need to be an expert in Windows or network administration. You will, however, need a few technical and knowledge requirements to follow along:
- An understanding of how networks work (MAC addresses, IP addresses, VLANs)
- Access to Windows Server 2019 (or 2016) with two or more network adapters — almost everything you see here also applies to Windows Server 2016, 2012 R2, and 2012.
NIC Teaming: An Overview
In a typical networking arrangement for a physical server, you establish fault tolerance by plugging multiple network connections from the server into multiple physical switches (possibly part of a single switch stack). As a result, the server stays online but ends up with several IP addresses, and load balancing is non-existent.
By establishing a NIC team on your server, you can retain connections to multiple physical switches while using only a single IP address. Load balancing becomes attainable, fault tolerance is instantaneous instead of waiting for DNS records to time out or update, and administration is simplified.
NIC teaming is a Windows Server feature that enables you to organize NICs into “teams.” Each team is made up of one or more team members (NICs) and one or more virtual NICs that are ready for usage.
The network adapters that the team uses to communicate with the switch are known as team members. The virtual network adapters that are created when you form a team are known as team interfaces. Because the team interfaces are the ones assigned an IP address, it can be difficult to remember which is which.
When it comes to NIC teaming and bonding, what’s the difference?
NIC Bonding and NIC Teaming are interchangeable terms.
NIC teaming is supported in all versions of Windows Server starting with Windows Server 2012. The feature is incredibly versatile, making link aggregation/load balancing, failover, and software-defined networking (VLANs) easier for administrators.
Similar features exist on specific hardware from certain vendors; however, Microsoft’s version of NIC teaming aims to offer this functionality regardless of hardware or vendor.
When it comes to NIC teaming and bridging, what’s the difference?
NIC Teaming lets you construct a NIC interface that spans one or more NIC adapters on the same network. NIC Bridging, by contrast, pairs NIC adapters from distinct subnets to enable communication between those two subnets.
While creating a NIC Team, you’ll configure the Teaming Mode, Load Balancing Mode, Standby adapter, and Team interface VLAN. Each of these elements is described in detail below.
Teaming Mode
When you create a NIC Team, you must select which Teaming Mode to use. The Teaming Mode determines how the server and switch(es) will split traffic between the multiple links. There are three Teaming Modes: Switch Independent, LACP, and Static.
Switch Independent
Switch Independent teaming allows you to connect team members to multiple, non-stacked switches. Switch Independent is the only Teaming Mode that requires no configuration changes on the switches you connect it to. This mode uses only MAC addresses to control which interface incoming data should be sent to.
There are a few situations where you may choose the Switch Independent Teaming Mode. This could be when:
- You are unable to modify the settings of your connected switches.
- Your team members connect to multiple non-stacked switches.
- You are creating a NIC Team in a virtual machine (more on that in the Virtual Machine NIC Teaming section below).
If you prefer to use one adapter for traffic and only fail over to a standby adapter during a physical link failure, you must use the Switch Independent Teaming Mode and configure a Standby adapter.
A standby adapter is seldom used because it limits the overall bandwidth available to the server. The default configuration is “None (all adapters Active)”.
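As a minimal PowerShell sketch of that standby scenario — the team and adapter names (Team1, NIC1, NIC2) are hypothetical placeholders, not values from this demo:
# Create a Switch Independent team from two adapters (hypothetical names)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
# Mark one member as Standby; it only carries traffic after a link failure
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby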
Static
Static teaming requires manually configuring the NIC team’s ports on the switch into a Link Aggregation Group (LAG). The server and switch will distribute traffic over all active connections.
If a port on either end is connected to a different device, traffic will still be sent down that link even though the other side isn’t expecting it. As a result, Static teaming is poor at isolating issues such as incorrectly plugged cables.
You should only use the Static Teaming Mode when your switches cannot support LACP.
Link Aggregation Control Protocol (LACP)
LACP teaming is similar to Static teaming, except that it verifies that each active cable in the connection is attached to the correct LAG. With LACP, data will not be sent across connections that aren’t linked to the expected LAG.
You should use LACP when you want to make the switch aware of the NIC team so that it can load balance data sent to the NIC team.
Important: The Static and LACP Teaming Modes require you to connect the host to only a single switch or a single switch stack.
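As a sketch, assuming hypothetical adapter names and switch ports that are already configured into an LACP LAG, creating an LACP team looks like this:
# Create an LACP team; the switch ports must already be grouped into an LACP LAG
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp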
Load Balancing Mode
The Load Balancing Mode determines how the team will present interfaces for incoming data and which adapters to use for outgoing data. The available options are Address Hash, Hyper-V Port, and Dynamic.
Unlike a dedicated load balancing appliance, a NIC team will not distribute incoming traffic uniformly across team members’ links.
Address Hash
Address Hash mode attempts to use the source and destination IP addresses and ports to create an effective balance between team members. If no ports are part of a connection, it will use only IP addresses to determine how to load balance. If no IP addresses are part of a connection, it will fall back to MAC addresses.
When forming a NIC team, you can require it to use IP+Port, IP addresses only, or MAC addresses only. The default is IP+Port, which offers the best balance among team members. If you want to use only IP or MAC addresses, you’ll need to create your NIC team with PowerShell.
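For example, here is a sketch of creating a team that hashes on IP addresses only (the MacAddresses value works the same way); the team and adapter names are hypothetical placeholders:
# TransportPorts (IP+Port) is the default; IPAddresses restricts the hash to IPs only
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -LoadBalancingAlgorithm IPAddresses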
While the Address Hash Load Balancing Mode does a good job of splitting outbound traffic between team members, it is unable to adapt to over- or under-utilized team members. Also, all inbound traffic uses the MAC address of the primary team interface, which limits inbound traffic to a single link when using the Switch Independent Teaming Mode.
You must use Address Hash when creating a team inside of a virtual machine.
Hyper-V Port
Hyper-V Port mode is intended only for use on Hyper-V virtual machine hosts. This mode assigns a MAC address to each VM on the host and then assigns a team member to each of those MAC addresses. This allows a specific VM to have a predictable team member under normal operation.
A predictable team member for each VM means that a VM’s bandwidth is limited to the maximum of the single link it operates over. When a Hyper-V host has few VMs on it, the Hyper-V Port Load Balancing Mode is likely to be poorly balanced.
You normally don’t need to use Hyper-V Port mode, but you may find it beneficial if you must ensure that each VM uses the same link at all times.
Dynamic
Dynamic mode combines the best features of the Address Hash and Hyper-V Port modes to balance outbound and inbound network traffic. Like Hyper-V Port, inbound traffic is split by assigning team members to different MAC addresses. Like Address Hash, outbound traffic is split by a hash derived from IP/Port. This mixture provides better balancing than either of the above methods.
One significant benefit of dynamic balancing mode is dynamic traffic monitoring. When the dynamic mode algorithm detects that some team members are over- or under-utilized, it re-balances outgoing traffic to other team members as appropriate.
TCP streams have a natural cadence that makes it possible to predict future traffic volumes and breaks in the TCP stream; Microsoft calls these flowlets. Through flowlets, the Dynamic Load Balancing Mode can also anticipate which team members will become over- or under-utilized and re-balance outbound traffic in advance.
Dynamic mode is nearly always the optimal load balancing choice.
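As a sketch (the team name Team1 is an assumed placeholder), you can also switch an existing team to Dynamic after creation:
# Change the load balancing algorithm on an existing team
# Valid values: TransportPorts, IPAddresses, MacAddresses, HyperVPort, Dynamic
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic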
Team Interface VLAN
When you create a team, by default, it creates a single team interface. The team interface has a VLAN setting to tag traffic on the interface for a specific VLAN. Setting the team interface VLAN to a tagged VLAN is typically only done when the switch ports the team members connect to are in ‘trunk’ mode.
Once you’ve created the team, you may construct additional team interfaces on various VLANs.
Warning: Microsoft does not recommend setting the VLAN for an interface within a VM. To define VLANs for a VM, use the Hyper-V switch advanced option “VLAN ID.”
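That Hyper-V setting can also be applied from the host with PowerShell. A minimal sketch, assuming a hypothetical VM named VM01 and VLAN 42:
# Tag a VM's network adapter to a specific VLAN from the Hyper-V host
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 42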
Virtual Machine NIC Teaming
Setting up NIC teams in a virtual machine comes with several caveats. Microsoft warns that using “teams on teams” (host-level teams combined with VM-level teams) can be unreliable and result in connection loss.
Previously, achieving fault tolerance for a VM required connecting the VM to multiple external virtual switches. To avoid congestion, you had to plan which VMs would share each virtual switch. The prospect of congestion from other VMs on the host compounded load balancing concerns even further.
Nowadays, you can use a single network adapter for each VM and build a NIC team on the VM host. When a physical port or switch fails, all VMs retain complete fault tolerance. All VMs’ traffic can be balanced between team members for significantly better overall throughput and congestion management. This is how your setup may look now:
A topology for a NIC Team
In certain cases, NIC teaming in a VM is used alongside SR-IOV in order to reduce the networking stack’s CPU overhead. For SR-IOV to operate, you’ll also need BIOS and NIC support.
Requirements
To be considered a “supported configuration,” NIC teaming inside a VM must meet the following criteria:
- You must be using multiple adapters in the VM.
- The adapters must be connected to two “external” type virtual switches.
- Switches must be on the same L2 subnet when connecting to physical switches.
- The VM NIC Team’s Teaming Mode must be set to Switch Independent and the Load Balancing Mode must be set to Address Hash.
In Hyper-V, you must additionally enable NIC Teaming for each VM that will contain a team, from the Advanced Features tab of each network adapter. You can see an example of this setting in the image below.
Hyper-V option for NIC teaming
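The same setting can be flipped from the host with PowerShell. A sketch, assuming a hypothetical VM named VM01:
# Allow the VM's network adapters to participate in a guest NIC team
Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On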
Increasing Efficiency
While NIC Teaming offers decent out-of-the-box performance, there are a number of circumstances where you may need to boost performance further. The details of these situations are beyond the scope of this article, but if you want to learn more, research the terms below:
- Remote Direct Memory Access (RDMA)
- Switch Embedded Teaming (SET)
- Receive Side Scaling (RSS)
- Single Root I/O Virtualization (SR-IOV)
In general, these extra options lower the networking stack’s CPU cost and connection latency. More information on enhancing performance can be found in the articles Software and Hardware technologies described and Higher performance with RDMA with SET.
Using Windows Server to Create a NIC Team
You should be ready to form a NIC team now that you understand how NIC teaming works and have a vision for streamlining your networking!
Due to demo environment limits, a NIC team will be built on a VM for this example. The process for setting up a NIC team on a physical server is the same; any VM-specific actions that are required will be called out.
How can you tell whether NIC teaming is turned on?
NIC teaming has been supported in all versions of Windows Server since 2012 (Server 2012, 2012 R2, 2016, and 2019).
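As a quick check, the Get-NetLbfoTeam cmdlet lists any teams already configured on a host; no output means no teams exist yet:
# Lists existing NIC teams along with their Teaming Mode and load balancing algorithm
Get-NetLbfoTeam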
Using the GUI to set up NIC teaming
Open Server Manager on a Windows Server system to get started. Make sure you’re connected to the computer where you want to set up the NIC team.
1. To begin, right-click the server name you want to build a NIC team for and choose Configure NIC Teaming.
In Server Manager, enable the NIC Teaming option.
2. Select the NICs to add to the new team from the Adapters and Interfaces panel. Then, right-click on the adapters you want to add to a new team and choose Add to New Team.
In Server Manager, there is an option to add a new team.
Note: A NIC team may be configured with anywhere from one to 32 adapters and one or more team interfaces.
3. To create the team, provide a descriptive Team name and adjust Additional Properties as appropriate, then click OK.
In this example, the NIC team is being set up on a VM. As a result, the Teaming Mode and the Load Balancing Mode cannot be changed. If this demo were on a physical server, you’d probably use Switch Independent, or LACP if using a LAG on the switch.
Dialog box for a new team
On the Windows Server, the NIC team should now be established.
Adding NICs or Interfaces to the NIC Team
Once created, you can add NICs to a configured team from the same NIC Teaming window. To do so, right-click on an available NIC and select Add to Team “<Team name>”.
The Add to Team “Demo” option.
You can also add more interfaces to a team by selecting the Team Interfaces tab and then clicking TASKS —> Add Interface, as shown below.
Option to add an interface
When the input box appears, enter the VLAN you want to use, along with a name if desired, as seen below.
Dialog box for the new Team interface
Using Windows PowerShell to Set Up NIC Teaming
Let’s look at how to set up a NIC team using PowerShell now that you know how to do it using the GUI.
Identifying NIC Names
You must first decide which NICs will be added to the team. Specifically, you’ll need to find out the NIC names.
Use the Get-NetAdapter cmdlet to get the NIC names. When you run this cmdlet, take note of the names that appear, as shown below.
PowerShell cmdlet Get-NetAdapter
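A sketch of trimming that output to just the useful columns; the Name values are what New-NetLbfoTeam expects:
# Show adapter names and link status; only Up adapters are useful team members
Get-NetAdapter | Select-Object Name, InterfaceDescription, Status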
After you’ve written down the names, you may use PowerShell to form the team! For this demonstration, we’ll use the Ethernet 3 and 4 NICs for the new NIC Team.
Putting Together the NIC Team
You just need to execute one more cmdlet (New-NetLbfoTeam) now that you have the adapter names. The output of the New-NetLbfoTeam cmdlet is shown in the sample below.
You’ll use the names of the NICs you gathered earlier for the TeamMembers argument.
The TeamingMode is set to SwitchIndependent in this example. If you’re arranging the switch ports into a LAG, you’ll probably want to use the Lacp value instead. If your switch doesn’t have a LAG, you’ll generally want SwitchIndependent.
On a physical server, the LoadBalancingAlgorithm parameter value of Dynamic delivers the most equitable load balancing among team members; because this demo runs on a VM, TransportPorts is used instead.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 3","Ethernet 4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
Using the PowerShell cmdlet New-NetLbfoTeam
After that, a new virtual NIC will show in the adapter list from Get-NetAdapter:
A new virtual NIC has been built.
Notes for a NIC Team on a VM:
- You must use the SwitchIndependent TeamingMode.
- You must use one of the Address Hash types for the LoadBalancingAlgorithm (TransportPorts).
- On a physical server, you would use Dynamic load balancing instead of TransportPorts.
Adding NICs or Interfaces to the NIC Team
After you’ve created the NIC team, you can add NICs and interfaces to it in the same way you would with the GUI. Use the Add-NetLbfoTeamMember cmdlet to add extra NICs to the team.
Add-NetLbfoTeamMember -Name "NIC1" -Team "Team1"
You may also use the Add-NetLbfoTeamNic cmdlet to add extra team interfaces.
Add-NetLbfoTeamNic -Team "Team1" -VlanID 42
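To confirm the new team interface exists, a quick check against the same team name:
# List the team's interfaces and their VLAN IDs
Get-NetLbfoTeamNic -Team "Team1"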
Summary
You now know what NIC Teaming is, how it affects performance/VM use/networking simplicity, and how to set it up using the GUI or PowerShell.