This article describes how to plan your networking fabric in System Center - Virtual Machine Manager (VMM).
Networking components
VMM networking contains a number of components, summarized in the following table:
Networking component | Details |
---|---|
Logical networks | In VMM, your physical networks are defined as logical networks. Logical networks are a useful way of abstracting your underlying physical network infrastructure. Logical network settings match or mirror your physical network environment. For example, the IP address and VLAN properties will match exactly, and a network site in a logical network will contain the configuration settings for that site. By default, VMM creates a logical network automatically when you add a Hyper-V host to the fabric if a suitable network can't be found. You can disable this option. To abstract logical networks from the VMs that use them, VMM provides VM networks. You connect the virtual adapter of a VM to a VM network. |
MAC address pools | You can create MAC address pools for VMs running on virtualization hosts in the VMM fabric. When you use static MAC address pools VMM can automatically generate and assign MAC addresses to VMs. You can use a standard pool or configure a custom pool. |
Load balancers | VMM supports adding hardware load balancers, or using Network Load Balancing (NLB), to load balance requests to a service tier. |
VIP templates | Virtual IP (VIP) templates contain load balancing information for a particular type of traffic. For example you could have a template that specifies how to balance HTTPS traffic on a specific load balancer. |
Logical switches | Logical switches are containers for virtual switch settings. You apply logical switches to hosts so that you have consistent switch settings across all hosts. VMM tracks switch settings on hosts deployed with logical switches to ensure compliance. |
Port profiles | Port profiles act as containers for the properties you want a network adapter to have. Instead of configuring properties per network adapter, you set them up in the port profile and apply that profile to an adapter. There are two types of port profiles. Virtual port profiles contain settings that are applied to virtual network adapters connected to VMs or used by virtualization hosts. Uplink port profiles define how a virtual switch connects to a logical network. |
Port classifications | Port classifications are abstract containers for virtual port profile settings. This abstraction means that admins and tenants can assign a port classification to a VM template, while the VM's logical switch determines which port profile should be used, so both admins and tenants can select a suitable classification. VMM contains a number of default port classifications. For example, there's a classification for VMs that need high bandwidth and a different one for VMs that need low bandwidth. Port classifications are linked to virtual port profiles when you configure logical switches. |
Plan logical networks
During deployment you'll need to create logical networks and set up network sites and IP addressing in each network. Then you'll create VM networks based on those logical networks.
Here's what you'll need to plan:
- Automatic creation: Decide whether you want to let VMM create logical networks. VMM will automatically create a logical network each time you add a virtualization host. VMM doesn't create network sites in the automatically created logical network. You can turn this option off in Settings > General > Network Settings by clearing Automatic creation of logical networks.
- Logical network capacity: If you're going to create logical networks manually figure out what you'll need to represent your physical network topology. For example if you need a management network and a network used by VMs you should create two logical networks.
- Logical network types: Figure out the type of logical network you need. You'll configure VM networks on top of logical networks. Those VM networks can provide network virtualization, with the ability to create multiple virtual networks on a shared physical network, or they can provide isolation with VLANs and PVLANs. When you configure the logical network, you'll need to indicate the type of network you need.
- Network sites: Determine how many network sites you need in the logical network. You could plan around host groups and host locations. For example a Seattle host group and a New York host group. You don't need network sites if you don't have VLANs and you're using DHCP to allocate IP addresses.
- VLANs/subnets: Figure out the VLANs and IP subnets you need in the logical network. These will mirror what you have in your physical network topology.
- IP addressing: If you're using static IP address assignment determine which logical networks need static address pools.
Here's what you'll need to do:
- Identify baseline logical networks: Identify a set of initial logical networks that mirror the physical networks in your environment.
- Identify additional logical networks for specific requirements: Define logical networks with specific purpose or perform a particular function within your environment. One of the benefits of logical networks is that you can separate computer and network services with different business purposes without needing to change your physical infrastructure.
- Determine isolation requirements: Identify which logical networks need to be isolated and how that isolation will be enforced, whether through physical separation, VLAN/PVLAN, or network virtualization. Remember that you need isolation if the logical network is going to be used by multiple tenants. If you have a single tenant or customer, isolation is optional. If you don't need isolation, you'll only need a single VM network that maps to the logical network.
- Determine the network sites, VLANs, PVLANs, and IP pools that need to be defined for each logical network you have identified.
- Figure out which logical network will associate with which virtualization hosts.
Plan logical networks, network sites, and IP address pools
Use the following table to plan for the logical networks, VM networks, and IP address pools you will need to support a virtualized infrastructure.
Item to review or determine | Description and (as needed) links within this topic |
---|---|
Logical networks already created by default by VMM | When you add a Hyper-V host to VMM, logical networks may be created by default, based on DNS suffixes. |
How many logical networks you need, and the purpose of each | Plan to create logical networks to represent the network topology for your hosts. For example, if you need a management network, a network used for cluster heartbeats, and a network used by virtual machines, create a logical network for each. |
Categories that your logical networks fall into | Review the purposes of your logical networks, and categorize them: - No isolation: For example, a cluster-heartbeat network for a host cluster. - VLAN: Isolation provided by your VLANs. - Virtualized: Provides a foundation for Hyper-V network virtualization. - External: Managed through a network manager (vendor network-management console or virtual switch extension manager) outside of VMM. - IPAM: Managed through an IP Address Management (IPAM) server. |
How many network sites are needed in each logical network | One common way to plan network sites is around host groups and host locations. For example, for a 'Seattle' host group and a 'New York' host group, if you had a MANAGEMENT logical network, you might create two network sites, called MANAGEMENT - Seattle and MANAGEMENT - New York. |
Which VLANs and/or IP subnets are needed in each network site | The VLANs and IP subnets you assign should match your topology. |
Which logical networks (or specifically, which network sites) will need IP address pools | Determine which logical networks will use static IP addressing or load balancing, and which logical networks will be the foundation for network virtualization. For these logical networks, plan for IP address pools. |
Logical networks created by default
In the VMM console, in Fabric > Networking > Logical networks, you might see logical networks created by VMM by default. VMM creates these networks to ensure that when you add a host, you have at least one logical network for deploying virtual machines and services. No network sites are created automatically.
To illustrate how these settings work, suppose that you have not changed the settings, and you add a Hyper-V host to VMM management. In this case, VMM automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter. On the logical network, VMM also creates a VM network that is configured with "no isolation." For example, if the DNS suffix for the host network adapter was corp.contoso.com, VMM would create a logical network named "corp," and on it, a VM network named "corp" that is configured with no isolation.
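If you prefer to create logical networks manually, you can also turn this behavior off from the VMM command shell. A minimal sketch, assuming your VMM version exposes the automatic-creation setting through Set-SCVMMServer:

```powershell
# Connect to the VMM management server (hypothetical server name).
$vmmServer = Get-SCVMMServer -ComputerName "vmmserver01.corp.contoso.com"

# Disable automatic logical network creation; this is equivalent to clearing
# "Automatic creation of logical networks" in Settings > General > Network Settings.
Set-SCVMMServer -VMMServer $vmmServer -AutomaticLogicalNetworkCreationEnabled $false
```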
Guidelines for network sites: VLAN and IP subnet settings
The main guideline for specifying VLANs and IP subnets for network sites is to reflect your network topology. For details, see the following table.
Note
Network sites are sometimes referred to as 'logical network definitions,' for example, in Windows PowerShell commands.
Purpose of logical network | Guideline for network sites in that logical network |
---|---|
Static IP: Logical network that will use static IP addressing, for example, a network that supports host cluster nodes | Create at least one network site and associate at least one IP subnet with the network site. |
DHCP (but not VLANs): Logical network that does not include VLANs, with all computers or devices using DHCP | No network sites are necessary. |
VLANs: Logical network for VLAN-based independent networks | - If the VLANs use static IP addressing, create corresponding network sites that specify VLAN and IP subnet information. - If the VLANs use DHCP, create corresponding network sites that specify only VLAN information (no subnets). |
Network virtualization: Logical network that will be the foundation for VM networks using network virtualization | Create at least one network site and associate at least one IP subnet with the site. The IP subnet is necessary because this logical network will need an IP address pool. Assign a VLAN to the network site if appropriate. |
Load balancing: Logical network that will include a load balancer that is managed by VMM | Create at least one network site and associate at least one IP subnet with the network site. |
Note
For an external network, that is, a network managed through a vendor network-management console or virtual switch extension manager outside of VMM, you can configure settings through the vendor network-management console, and allow them to be imported from the vendor network-management database into VMM.
Guidelines for IP address pools
In general, create IP address pools where you will use static IP addressing or load balancing; also create IP address pools on logical networks that will be the foundation for VM networks supporting network virtualization. VMM uses IP address pools to assign IP addresses to Hyper-V hosts that you deploy through VMM, and to Windows-based virtual machines that you deploy through VMM, regardless of the type of host they are running on (Hyper-V or VMware ESX).
The following table provides detailed guidelines. Additional information about IP address pools is provided after the table.
Purpose of logical network | Guideline for creating IP address pools for that logical network, or for VM networks built on that logical network |
---|---|
Static IP: Logical network with 'no isolation,' and requiring static IP addressing, for example, a network that supports host cluster nodes | Create one or more IP address pools for the logical network. For a logical network with 'no isolation,' if you create a VM network on the logical network, any IP address pools will automatically become available on the VM network. In other words, the VM network will give direct access to the logical network. |
VLANs: Logical network for VLAN-based independent networks, using static IP addressing (rather than DHCP) | Create IP address pools on the logical network—one IP address pool for each VLAN where static IP addressing will be used. Later, when you create the VM networks that represent the VLANs, the IP address pools will automatically become available on those VM networks. |
Network virtualization: Logical network that will be the foundation for VM networks using network virtualization | Create IP address pools on the logical network that provides the foundation for the VM networks. Later, when you create the VM networks, you will also create IP address pools on them (and see the important note after this table). If you use DHCP on the VM networks, VMM will respond to a DHCP request with an address from an IP address pool. The process of creating an IP address pool for a VM network is similar to the process of creating an IP address pool for a logical network. |
Load balancing: Logical network that will be the foundation for a VM network, where you will use load balancing in a 'service tier' (part of a set of virtual machines deployed together as a VMM 'service') | Create a static IP address pool on the VM network, and in it, define a reserved range of IP addresses. When you use VMM to deploy a load-balanced service tier that uses the VM network, VMM uses the reserved range of IP addresses to assign virtual IP (VIP) addresses to the load balancer. |
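For example, here is a minimal sketch of creating a static IPv4 address pool on a network site from the VMM command shell; the logical network name, site name, and address range are hypothetical:

```powershell
# Look up the logical network and the network site (logical network definition).
$logicalNetwork = Get-SCLogicalNetwork -Name "MANAGEMENT"
$site = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork -Name "MANAGEMENT - Seattle"

# Create a static IP address pool on that site.
New-SCStaticIPAddressPool -Name "MANAGEMENT - Seattle Pool" `
    -LogicalNetworkDefinition $site -Subnet "10.0.10.0/24" `
    -IPAddressRangeStart "10.0.10.50" -IPAddressRangeEnd "10.0.10.99"
```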
Important
If you configure a virtual machine to obtain a static IP address from an IP address pool, you must also configure the virtual machine to use a static MAC address. You can either specify the MAC address manually (during the Configure Settings step) or have VMM automatically assign a MAC address from a MAC address pool.
This requirement for static MAC addresses is necessary because VMM uses the MAC address to identify which network adapter to set the static IP address to, and this identification must happen before the virtual machine starts. Identifying the network adapter is especially important if a virtual machine has multiple adapters. If the MAC addresses were assigned dynamically through Hyper-V, VMM could not consistently identify the correct adapter to set a static IP address on.
VMM provides static MAC address pools by default, but you can customize the pools.
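For example, a sketch of switching an existing VM's adapter to static IP and static MAC assignment from the VMM command shell; the VM name is hypothetical:

```powershell
# Get the virtual network adapter of an existing VM.
$vm = Get-SCVirtualMachine -Name "AppServer01"
$adapter = Get-SCVirtualNetworkAdapter -VM $vm

# Use static IP assignment from an IP address pool, and let VMM assign a
# static MAC address automatically from a MAC address pool.
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $adapter `
    -IPv4AddressType Static -MACAddressType Static
```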
- When you create a static IP address pool, you can configure associated information, such as default gateways, Domain Name System (DNS) servers, DNS suffixes, and Windows Internet Name Service (WINS) servers. All of these settings are optional.
- IP address pools support both IPv4 and IPv6 addresses. However, you cannot mix IPv4 and IPv6 addresses in the same IP address pool.
Note
After a virtual machine has been deployed in VMM, you can view the IP address or addresses assigned to that virtual machine. To do this, right-click the listing for the virtual machine, click Properties, click the Hardware Configuration tab, click the network adapter, and in the results pane, click Connection details.
Network configuration recommendations for a Hyper-V cluster
Applies To: Windows Server 2012
There are several different types of network traffic that you must consider and plan for when you deploy a highly available Hyper-V solution. You should design your network configuration with the following goals in mind:
- To ensure network quality of service
- To provide network redundancy
- To isolate traffic to defined networks
- To take advantage of Server Message Block (SMB) Multichannel, where applicable
This topic provides network configuration recommendations that are specific to a Hyper-V cluster that is running Windows Server 2012. It includes an overview of the different network traffic types, recommendations for how to isolate traffic, recommendations for features such as NIC Teaming, Quality of Service (QoS) and Virtual Machine Queue (VMQ), and a Windows PowerShell script that shows an example of converged networking, where the network traffic on a Hyper-V cluster is routed through one external virtual switch.
Windows Server 2012 supports the concept of converged networking, where different types of network traffic share the same Ethernet network infrastructure. In previous versions of Windows Server, the typical recommendation for a failover cluster was to dedicate separate physical network adapters to different traffic types. Improvements in Windows Server 2012, such as Hyper-V QoS and the ability to add virtual network adapters to the management operating system, enable you to consolidate the network traffic on fewer physical adapters. Combined with traffic isolation methods such as VLANs, these features let you isolate and control the network traffic.
Important
If you use System Center Virtual Machine Manager (VMM) to create or manage Hyper-V clusters, you must use VMM to configure the network settings that are described in this topic.
Overview of different network traffic types
When you deploy a Hyper-V cluster, you must plan for several types of network traffic. The following table summarizes the different traffic types.
Network Traffic Type | Description |
---|---|
Management | - Provides connectivity between the server that is running Hyper-V and basic infrastructure functionality. - Used to manage the Hyper-V management operating system and virtual machines. |
Cluster | - Used for inter-node cluster communication such as the cluster heartbeat and Cluster Shared Volumes (CSV) redirection. |
Live migration | - Used for virtual machine live migration. |
Storage | - Used for SMB traffic or for iSCSI traffic. |
Replica traffic | - Used for virtual machine replication through the Hyper-V Replica feature. |
Virtual machine access | - Used for virtual machine connectivity. - Typically requires external network connectivity to service client requests. |
The following sections provide more detailed information about each network traffic type.
Management traffic
A management network provides connectivity between the operating system of the physical Hyper-V host (also known as the management operating system) and basic infrastructure functionality such as Active Directory Domain Services (AD DS), Domain Name System (DNS), and Windows Server Update Services (WSUS). It is also used for management of the server that is running Hyper-V and the virtual machines.
The management network must have connectivity between all required infrastructure, and to any location from which you want to manage the server.
Cluster traffic
A failover cluster monitors and communicates the cluster state between all members of the cluster. This communication is very important to maintain cluster health. If a cluster node does not communicate a regular health check (known as the cluster heartbeat), the cluster considers the node down and removes the node from cluster membership. The cluster then transfers the workload to another cluster node.
Inter-node cluster communication also includes traffic that is associated with CSV. For CSV, where all nodes of a cluster can access shared block-level storage simultaneously, the nodes in the cluster must communicate to orchestrate storage-related activities. Also, if a cluster node loses its direct connection to the underlying CSV storage, CSV has resiliency features which redirect the storage I/O over the network to another cluster node that can access the storage.
Live migration traffic
Live migration enables the transparent movement of running virtual machines from one Hyper-V host to another without a dropped network connection or perceived downtime.
We recommend that you use a dedicated network or VLAN for live migration traffic to ensure quality of service and for traffic isolation and security. Live migration traffic can saturate network links. This can cause other traffic to experience increased latency. The time it takes to fully migrate one or more virtual machines depends on the throughput of the live migration network. Therefore, you must ensure that you configure the appropriate quality of service for this traffic. To provide the best performance, live migration traffic is not encrypted.
You can designate multiple networks as live migration networks in a prioritized list. For example, you might have one fast (10 GbE) migration network for cluster nodes in the same cluster, and a second, slower (1 GbE) migration network for cross-cluster migrations.
All Hyper-V hosts that can initiate or receive a live migration must have connectivity to a network that is configured to allow live migrations. Because live migration can occur between nodes in the same cluster, between nodes in different clusters, and between a cluster and a stand-alone Hyper-V host, make sure that all these servers can access a live migration-enabled network.
Storage traffic
For a virtual machine to be highly available, all members of the Hyper-V cluster must be able to access the virtual machine state. This includes the configuration state and the virtual hard disks. To meet this requirement, you must have shared storage.
In Windows Server 2012, there are two ways that you can provide shared storage:
- Shared block storage. Shared block storage options include Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, and shared Serial Attached SCSI (SAS).
- File-based storage. Remote file-based storage is provided through SMB 3.0.
SMB 3.0 includes new functionality known as SMB Multichannel. SMB Multichannel automatically detects and uses multiple network interfaces to deliver high performance and highly reliable storage connectivity.
By default, SMB Multichannel is enabled and requires no additional configuration. For SMB Multichannel to take effect, you should use at least two network adapters of the same type and speed. Network adapters that support RDMA (Remote Direct Memory Access) are recommended but not required.
SMB 3.0 also automatically discovers and takes advantage of available hardware offloads, such as RDMA. A feature known as SMB Direct supports the use of network adapters that have RDMA capability. SMB Direct provides the best performance possible while also reducing file server and client overhead.
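For example, you can confirm that Multichannel is enabled and see which interfaces it is using with the built-in SMB cmdlets:

```powershell
# Verify that SMB Multichannel is enabled on the SMB client.
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# List active SMB connections and the network interfaces they use.
Get-SmbMultichannelConnection
```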
Note
The NIC Teaming feature is incompatible with RDMA-capable network adapters. Therefore, if you intend to use the RDMA capabilities of the network adapter, do not team those adapters.
Both iSCSI and SMB use the network to connect the storage to cluster members. Because reliable storage connectivity and performance is very important for Hyper-V virtual machines, we recommend that you use multiple networks (physical or logical) to ensure that these requirements are achieved.
Note
For more information about SMB Direct and SMB Multichannel, see Improve Performance of a File Server with SMB Direct and The basics of SMB Multichannel, a feature of Windows Server 2012 and SMB 3.0.
Replica traffic
Hyper-V Replica provides asynchronous replication of Hyper-V virtual machines between two hosting servers or Hyper-V clusters. Replica traffic occurs between the primary and Replica sites.
Hyper-V Replica automatically discovers and uses available network interfaces to transmit replication traffic. To throttle and control the replica traffic bandwidth, you can define QoS policies with minimum bandwidth weight.
If you use certificate-based authentication, Hyper-V Replica encrypts the traffic. If you use Kerberos-based authentication, traffic is not encrypted.
Virtual machine access traffic
Most virtual machines require some form of network or Internet connectivity. For example, workloads that are running on virtual machines typically require external network connectivity to service client requests. This can include tenant access in a hosted cloud implementation. Because multiple subclasses of traffic may exist, such as traffic that is internal to the datacenter and traffic that is external (for example, to a computer outside the datacenter or to the Internet), one or more networks are required for these virtual machines to communicate.
To separate virtual machine traffic from the management operating system, we recommend that you use VLANs which are not exposed to the management operating system.
How to isolate the network traffic on a Hyper-V cluster
To provide the most consistent performance and functionality, and to improve network security, we recommend that you isolate the different types of network traffic.
Note
Realize that if you want to have a physical or logical network that is dedicated to a specific traffic type, you must assign each physical or virtual network adapter to a unique subnet. For each cluster node, Failover Clustering recognizes only one IP address per subnet.
Isolate traffic on the management network
We recommend that you use a firewall or IPsec encryption, or both, to isolate management traffic. In addition, you can use auditing to ensure that only defined and allowed communication is transmitted through the management network.
Isolate traffic on the cluster network
To isolate inter-node cluster traffic, you can configure a network to either allow cluster network communication or not to allow cluster network communication. For a network that allows cluster network communication, you can also configure whether to allow clients to connect through the network. (This includes client and management operating system access.)
A failover cluster can use any network that allows cluster network communication for cluster monitoring, state communication, and for CSV-related communication.
To configure a network to allow or not to allow cluster network communication, you can use Failover Cluster Manager or Windows PowerShell. To use Failover Cluster Manager, click Networks in the navigation tree. In the Networks pane, right-click a network, and then click Properties.
Figure 1. Failover Cluster Manager network properties
The following Windows PowerShell example configures a network named Management Network to allow cluster and client connectivity.
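A minimal sketch of that configuration, which sets the network's Role property (the possible values follow):

```powershell
# Allow both cluster communication and client connectivity (Role = 3).
(Get-ClusterNetwork -Name "Management Network").Role = 3
```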
The Role property has the following possible values.
Value | Network Setting |
---|---|
0 | Do not allow cluster network communication |
1 | Allow cluster network communication only |
3 | Allow cluster network communication and client connectivity |
The following table shows the recommended settings for each type of network traffic. Realize that virtual machine access traffic is not listed because these networks should be isolated from the management operating system by using VLANs that are not exposed to the host. Therefore, virtual machine networks should not appear in Failover Cluster Manager as cluster networks.
Network Type | Recommended Setting |
---|---|
Management | Both of the following: - Allow cluster network communication on this network - Allow clients to connect through this network |
Cluster | Allow cluster network communication on this network Note: Clear the Allow clients to connect through this network check box. |
Live migration | Allow cluster network communication on this network Note: Clear the Allow clients to connect through this network check box. |
Storage | Do not allow cluster network communication on this network |
Replica traffic | Both of the following: - Allow cluster network communication on this network - Allow clients to connect through this network |
Isolate traffic on the live migration network
By default, live migration traffic uses the cluster network topology to discover available networks and to establish priority. However, you can manually configure live migration preferences to isolate live migration traffic to only the networks that you define. To do this, you can use Failover Cluster Manager or Windows PowerShell. To use Failover Cluster Manager, in the navigation tree, right-click Networks, and then click Live Migration Settings.
Figure 2. Live migration settings in Failover Cluster Manager
The following Windows PowerShell example enables live migration traffic only on a network that is named Migration_Network.
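A sketch of one way to do this, which excludes every cluster network except Migration_Network from live migration by setting the MigrationExcludeNetworks private property of the Virtual Machine resource type:

```powershell
# Build a semicolon-separated list of the IDs of all cluster networks other
# than "Migration_Network", and exclude them from live migration.
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value (
        [String]::Join(";", (Get-ClusterNetwork |
            Where-Object { $_.Name -ne "Migration_Network" }).ID))
```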
Isolate traffic on the storage network
To isolate SMB storage traffic, you can use Windows PowerShell to set SMB Multichannel constraints. SMB Multichannel constraints restrict SMB communication between a given file server and the Hyper-V host to one or more defined network interfaces.
For example, the following Windows PowerShell command sets a constraint for SMB traffic from the file server FileServer1 to the network interfaces SMB1, SMB2, SMB3, and SMB4 on the Hyper-V host from which you run this command.
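A sketch of that constraint:

```powershell
# Restrict SMB connections to FileServer1 to the four SMB interfaces.
New-SmbMultichannelConstraint -ServerName "FileServer1" `
    -InterfaceAlias "SMB1", "SMB2", "SMB3", "SMB4"
```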
For more information, see New-SmbMultichannelConstraint.
To isolate iSCSI traffic, configure the iSCSI target with interfaces on a dedicated network (logical or physical). Use the corresponding interfaces on the cluster nodes when you configure the iSCSI initiator.
Isolate traffic for replication
To isolate Hyper-V Replica traffic, we recommend that you use a different subnet for the primary and Replica sites.
If you want to isolate the replica traffic to a particular network adapter, you can define a persistent static route which redirects the network traffic to the defined network adapter. To specify a static route, use the following command:
route add <destination> mask <subnet mask> <gateway> if <interface> -p
For example, to add a static route to the 10.1.17.0 network (example network of the Replica site) that uses a subnet mask of 255.255.255.0, a gateway of 10.0.17.1 (example IP address of the primary site), where the interface number for the adapter that you want to dedicate to replica traffic is 8, run the following command:
route add 10.1.17.0 mask 255.255.255.0 10.0.17.1 if 8 -p
NIC Teaming (LBFO) recommendations
We recommend that you team physical network adapters in the management operating system. This provides bandwidth aggregation and network traffic failover if a network hardware failure or outage occurs.
The NIC Teaming feature, also known as load balancing and failover (LBFO), provides two basic sets of algorithms for teaming.
- Switch-dependent modes. Requires the switch to participate in the teaming process. Typically requires all the network adapters in the team to be connected to the same switch.
- Switch-independent modes. Does not require the switch to participate in the teaming process. Although not required, team network adapters can be connected to different switches.
Both modes provide for bandwidth aggregation and traffic failover if a network adapter failure or network disconnection occurs. However, in most cases only switch-independent teaming provides traffic failover for a switch failure.
NIC Teaming also provides a traffic distribution algorithm that is optimized for Hyper-V workloads. This algorithm is referred to as the Hyper-V port load balancing mode. This mode distributes the traffic based on the MAC address of the virtual network adapters. The algorithm uses round robin as the load-balancing mechanism. For example, on a server that has two teamed physical network adapters and four virtual network adapters, the first and third virtual network adapter will use the first physical adapter, and the second and fourth virtual network adapter will use the second physical adapter. Hyper-V port mode also enables the use of hardware offloads such as virtual machine queue (VMQ) which reduces CPU overhead for networking operations.
Recommendations
For a clustered Hyper-V deployment, we recommend that you use the following settings when you configure the additional properties of a team.
Property Name | Recommended Setting |
---|---|
Teaming mode | Switch Independent (the default setting) |
Load balancing mode | Hyper-V Port |
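As a sketch, the following command creates a team with these settings; the team and member adapter names are hypothetical:

```powershell
# Create a switch-independent team with Hyper-V Port load balancing.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```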
Note
NIC teaming will effectively disable the RDMA capability of the network adapters. If you want to use SMB Direct and the RDMA capability of the network adapters, you should not use NIC Teaming.
For more information about the NIC Teaming modes and how to configure NIC Teaming settings, see Windows Server 2012 NIC Teaming (LBFO) Deployment and Management and NIC Teaming Overview.
Quality of Service (QoS) recommendations
You can use QoS technologies that are available in Windows Server 2012 to meet the service requirements of a workload or an application. QoS provides the following:
- Measures network bandwidth, detects changing network conditions (such as congestion or availability of bandwidth), and prioritizes - or throttles - network traffic.
- Enables you to converge multiple types of network traffic on a single adapter.
- Includes a minimum bandwidth feature which guarantees a certain amount of bandwidth to a given type of traffic.
We recommend that you configure appropriate Hyper-V QoS on the virtual switch to ensure that network requirements are met for all appropriate types of network traffic on the Hyper-V cluster.
Note
You can use QoS to control outbound traffic, but not the inbound traffic. For example, with Hyper-V Replica, you can use QoS to control outbound traffic (from the primary server), but not the inbound traffic (from the Replica server).
Recommendations
For a Hyper-V cluster, we recommend that you configure Hyper-V QoS that applies to the virtual switch. When you configure QoS, do the following:
- Configure minimum bandwidth in weight mode instead of in bits per second. Minimum bandwidth specified by weight is more flexible and it is compatible with other features, such as live migration and NIC Teaming. For more information, see the MinimumBandwidthMode parameter in New-VMSwitch.
- Enable and configure QoS for all virtual network adapters. Assign a weight to all virtual adapters. For more information, see Set-VMNetworkAdapter. To make sure that all virtual adapters have a weight, configure the DefaultFlowMinimumBandwidthWeight parameter on the virtual switch to a reasonable value. For more information, see Set-VMSwitch.
The following table recommends some generic weight values. You can assign a value from 1 to 100. For guidelines to consider when you assign weight values, see Guidelines for using Minimum Bandwidth.
Network Classification | Weight |
---|---|
Default weight | 0 |
Virtual machine access | 1, 3, or 5 (low-, medium-, and high-throughput virtual machines) |
Cluster | 10 |
Management | 10 |
Replica traffic | 10 |
Live migration | 40 |
Storage | 40 |
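A sketch of applying these weights to management operating system virtual adapters, assuming a virtual switch created with -MinimumBandwidthMode Weight and hypothetical vNIC names:

```powershell
# Give unclassified (default-flow) traffic an explicit minimum weight.
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 1

# Assign the table's weights to the management operating system vNICs.
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "SMB1"          -MinimumBandwidthWeight 40
```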
Virtual machine queue (VMQ) recommendations
Virtual machine queue (VMQ) is a feature that is available to computers that have VMQ-capable network hardware. VMQ uses hardware packet filtering to deliver packet data from an external virtual network directly to virtual network adapters. This reduces the overhead of routing packets. When VMQ is enabled, a dedicated queue is established on the physical network adapter for each virtual network adapter that has requested a queue.
Not all physical network adapters support VMQ. Those that do support VMQ have a fixed number of available queues, and the number varies by adapter. To determine whether a network adapter supports VMQ, and how many queues it supports, use the Get-NetAdapterVmq cmdlet.
You can assign virtual machine queues to any virtual network adapter. This includes virtual network adapters that are exposed to the management operating system. Queues are assigned according to a weight value, in a first-come, first-served manner. By default, all virtual adapters have a weight of 100.
Recommendations
We recommend that you increase the VMQ weight for interfaces with heavy inbound traffic, such as storage and live migration networks. To do this, use the Set-VMNetworkAdapter Windows PowerShell cmdlet.
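For example, a sketch that checks queue support and biases queue allocation toward the vNICs with heavy inbound traffic (the adapter names are hypothetical; VmqWeight is a relative 0-100 value, and 0 disables VMQ for an adapter):

```powershell
# Check which physical adapters support VMQ and how many queues they expose.
Get-NetAdapterVmq

# Bias queue allocation toward heavy inbound-traffic vNICs by giving lighter
# interfaces a lower relative weight (the default weight is 100).
Set-VMNetworkAdapter -ManagementOS -Name "Management" -VmqWeight 50
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"    -VmqWeight 50
Set-VMNetworkAdapter -ManagementOS -Name "SMB1"       -VmqWeight 100
```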
Example of converged networking: routing traffic through one Hyper-V virtual switch
The following Windows PowerShell script shows an example of how you can route traffic on a Hyper-V cluster through one Hyper-V external virtual switch. The example uses two physical 10 GbE network adapters that are teamed by using the NIC Teaming feature. The script configures a Hyper-V cluster node with a management interface, a live migration interface, a cluster interface, and four SMB interfaces. After the script, there is more information about how to add an interface for Hyper-V Replica traffic. The following diagram shows the example network configuration.
Figure 3. Example Hyper-V cluster network configuration
The example also configures network isolation which restricts cluster traffic from the management interface, restricts SMB traffic to the SMB interfaces, and restricts live migration traffic to the live migration interface.
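The original script is not reproduced in this copy, so the following is a condensed sketch of the configuration that the diagram describes. The adapter names, VLAN IDs, weights, and file server name are all placeholders; IP address assignment is omitted; and the cluster-specific steps assume the cluster already exists:

```powershell
# Team the two physical 10 GbE adapters (switch-independent, Hyper-V Port).
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create one external virtual switch on the team, using weight-based QoS.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 1

# Add the management operating system virtual adapters.
foreach ($name in "Management", "Cluster", "LiveMigration", "SMB1", "SMB2", "SMB3", "SMB4") {
    Add-VMNetworkAdapter -ManagementOS -Name $name -SwitchName "ConvergedSwitch"
}

# Isolate traffic types with VLANs (placeholder VLAN IDs).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30

# Apply minimum bandwidth weights (see the QoS table earlier in this topic).
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
foreach ($name in "SMB1", "SMB2", "SMB3", "SMB4") {
    Set-VMNetworkAdapter -ManagementOS -Name $name -MinimumBandwidthWeight 10
}

# After the cluster exists: restrict SMB traffic to the SMB interfaces, and
# allow client and cluster connectivity on the management network (Role = 3).
New-SmbMultichannelConstraint -ServerName "FileServer1" -InterfaceAlias `
    "vEthernet (SMB1)", "vEthernet (SMB2)", "vEthernet (SMB3)", "vEthernet (SMB4)"
(Get-ClusterNetwork -Name "Management Network").Role = 3

# Live migration can then be restricted to the LiveMigration network by using
# the MigrationExcludeNetworks technique shown earlier in this topic.
```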
Hyper-V Replica considerations
If you also use Hyper-V Replica in your environment, you can add another virtual network adapter to the management operating system for replica traffic. For example:
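A sketch, reusing the hypothetical switch name from the converged networking example:

```powershell
# Add a management operating system vNIC dedicated to replica traffic,
# and give it the replica weight from the QoS table.
Add-VMNetworkAdapter -ManagementOS -Name "Replica" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Replica" -MinimumBandwidthWeight 10
```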
Note
If you are instead using policy-based QoS, where you can throttle outgoing traffic regardless of the interface on which it is sent, you can throttle Hyper-V Replica traffic by creating a QoS policy that is based on the destination port. In the following example, the network listener on the Replica server or cluster has been configured to use port 8080 to receive replication traffic.
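A hedged sketch of such a policy; the throttle rate is a placeholder, and the destination-port match parameter names are an assumption that may vary by Windows version:

```powershell
# Throttle outgoing traffic destined for TCP port 8080 (the Replica listener).
# The destination-port parameters below are an assumption; verify them
# against the NetQos module on your Windows version.
New-NetQosPolicy -Name "Hyper-V Replica" -IPProtocolMatchCondition TCP `
    -IPDstPortStartMatchCondition 8080 -IPDstPortEndMatchCondition 8080 `
    -ThrottleRateActionBitsPerSecond 100MB
```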
Appendix: Encryption
Cluster traffic
By default, cluster communication is not encrypted. You can enable encryption if you want. However, realize that there is performance overhead that is associated with encryption. To enable encryption, you can use the following Windows PowerShell command to set the security level for the cluster.
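A minimal sketch, which sets the cluster's SecurityLevel common property:

```powershell
# 0 = clear text, 1 = signed (default), 2 = encrypted (see the table below).
(Get-Cluster).SecurityLevel = 2
```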
The following table shows the different security level values.
Security Description | Value |
---|---|
Clear text | 0 |
Signed (default) | 1 |
Encrypted | 2 |
Live migration traffic
Live migration traffic is not encrypted. You can enable IPsec or other network layer encryption technologies if you want. However, realize that encryption technologies typically affect performance.
SMB traffic
By default, SMB traffic is not encrypted. Therefore, we recommend that you use a dedicated network (physical or logical) or use encryption. For SMB traffic, you can use SMB encryption or layer-2 or layer-3 encryption; SMB encryption is the preferred method.
Replica traffic
If you use Kerberos-based authentication, Hyper-V Replica traffic is not encrypted. We strongly recommend that you encrypt replication traffic that transits public networks over the WAN or the Internet. We recommend Secure Sockets Layer (SSL) encryption as the encryption method. You can also use IPsec. However, realize that using IPsec may significantly affect performance.