How to deploy a converged network with Windows Server 2016


If you follow the news regularly, you have probably heard about converged networking. In this model, the same network adapters handle several kinds of traffic, such as storage, live-migration and virtual machine networks. This design brings flexibility and simplicity, and it is cost-effective.

First of all, this design is flexible because you can add and remove networks easily. Let's assume that you need to connect your virtual machines to a new network. To apply the new configuration, you just have to allow the right VLAN ID (VID) on the switch and reconfigure the VM with that VID. This is easier than adding a new network adapter and moving the VM to a new virtual switch, and this is why network convergence also brings simplicity.

Because you don't need to buy a lot of network adapters, this design is also cost-effective. With Windows Server 2016, we are able to converge almost everything, especially SMB traffic. So you can buy two network adapters faster than 10 Gb/s and converge all your traffic. Thanks to this design, you can have only three network cables plugged into each server (one for the Baseboard Management Controller and two for the networks). Can you imagine the simplicity of your cable management in your datacenter?

Converged Network overview

To understand network convergence, it is important to grasp a few network concepts. To deploy this kind of design, you need to understand the difference between access mode and trunk mode.

When you plug a standard server (that is, not a hypervisor) into a switch, this server doesn't need to handle several VLANs. The server belongs to a specific VLAN and should not be able to communicate on any other VLAN. From the switch's perspective, you have to configure the related switch port in access mode with a single VID.

On the other hand, some devices need to communicate on several VLANs. The first example is the link between two switches. The second example is the hypervisor. When you deploy a hypervisor, it must communicate with several networks such as heartbeat, live-migration, management, storage and so on. From the switch's perspective, you have to configure the related port in trunk mode.

Network convergence is based on switch ports configured in trunk mode. Each switch port connected to the hypervisor is set to trunk mode with the required VLANs. Then all the configuration is done from the operating system. So let's assume you need 20 hypervisors with two 25 Gb/s Network Interface Controllers (NICs) each. All you need is two dedicated switches with at least 24 ports each, with every switch port configured in trunk mode. Then you just have to plug in the servers and configure the operating systems.

From the operating system perspective, Windows Server 2016 brings Switch Embedded Teaming (SET), which makes it possible to converge almost everything. Before Windows Server 2016, you had to create a team and then create the virtual switch bound to the teaming virtual NIC. Some features, such as RDMA and vRSS, were not supported in the parent partition, so it was difficult to converge every network. Now with SET, the teaming is managed inside the virtual switch and more features are supported, especially vRSS and RDMA.

Once the SET is created, you just have to create virtual NICs so that the hypervisor can communicate with the required networks (such as storage, live-migration, management and so on).

Design of the example

To explain converged networking in more detail, I'll walk through a deployment example. Please find below the required network configuration:

  • Management network (VID 10) – Native VLAN (1x vNIC)
  • Live-Migration network (VID 11) – requires RDMA (1x vNIC)
  • Storage network (VID 12) – requires RDMA (2x vNIC)
  • Heartbeat network (VID 13) – (1x vNIC)
  • VM Network 1 (VID 100)
  • VM Network 2 (VID 101)

The storage and live-migration vNICs are used for SMB Direct traffic.

The server where I run this example has a dual-port Mellanox ConnectX-3 Pro network adapter.

Deploy a converged network

Create the virtual switch

First of all, I run Get-NetAdapter to list the network adapters that I want to add to the Switch Embedded Teaming:
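
A listing like the following shows the physical adapters; in this example they are named CNA01 and CNA02:

# The -Physical switch hides the host vNICs that will be created later
Get-NetAdapter -Physical | Format-Table Name, InterfaceDescription, LinkSpeed, Status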

Then I create the Switch Embedded Teaming with both network adapters:

New-VMSwitch -Name CNSwitch -AllowManagementOS $True -NetAdapterName CNA01,CNA02 -EnableEmbeddedTeaming $True

Now I gather virtual switch information with the Get-VMSwitch cmdlet to verify that my network adapters are bound to this vSwitch:
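
A check along these lines should report EmbeddedTeamingEnabled as True and list both physical adapters (property names taken from the Hyper-V module; the exact output layout may differ):

Get-VMSwitch -Name CNSwitch | Format-List Name, EmbeddedTeamingEnabled, NetAdapterInterfaceDescriptions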

Create virtual NICs

Next I create the virtual NICs for management, live-migration, storage and heartbeat. To create the vNICs, I use the Add-VMNetworkAdapter cmdlet with the -ManagementOS parameter. If you don't use -ManagementOS, you create a vNIC for a virtual machine.
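
Based on the vNIC names used in the rest of this example, the creation step looks like this:

# Create one host vNIC per network on the SET switch
Add-VMNetworkAdapter -ManagementOS -SwitchName CNSwitch -Name Management-10
Add-VMNetworkAdapter -ManagementOS -SwitchName CNSwitch -Name Live-Migration-11
Add-VMNetworkAdapter -ManagementOS -SwitchName CNSwitch -Name Storage01-12
Add-VMNetworkAdapter -ManagementOS -SwitchName CNSwitch -Name Storage02-12
Add-VMNetworkAdapter -ManagementOS -SwitchName CNSwitch -Name Heartbeat-13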

I verify that my vNICs are created:
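
The following command should now list the five vNICs created above:

Get-VMNetworkAdapter -ManagementOS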

Now that we have created the vNICs, we have to associate them with the right VLAN ID. To make this configuration, I run the Set-VMNetworkAdapterVlan cmdlet. I configure the vNICs in access mode so that their traffic is tagged with the right VID:

$Nic = Get-VMNetworkAdapter -Name Live-Migration-11 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 11

$Nic = Get-VMNetworkAdapter -Name Storage01-12 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 12

$Nic = Get-VMNetworkAdapter -Name Storage02-12 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 12

$Nic = Get-VMNetworkAdapter -Name Heartbeat-13 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 13

The management vNIC requires a specific configuration because we want to configure the native VLAN on this port. With a native VLAN, packets are sent untagged and the switch tags them with the native VLAN ID when they enter the port. This is really useful, especially for deploying your servers with PXE / DHCP. For the vNIC to leave packets untagged, we have to use the special VID 0.

$Nic = Get-VMNetworkAdapter -Name Management-10 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 0

Enable RDMA

Now that the vNICs are deployed, we can enable RDMA. We have seen that storage and live-migration require this feature. To enable RDMA on the vNICs Storage01-12, Storage02-12 and Live-Migration-11, you can run the following cmdlets:

Get-NetAdapterRDMA -Name *Storage* | Enable-NetAdapterRDMA
Get-NetAdapterRDMA -Name *Live-Migration* | Enable-NetAdapterRDMA

Then I verify that the feature is enabled:
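
The Enabled column should report True for the three RDMA vNICs:

Get-NetAdapterRdma | Format-Table Name, Enabled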

Deal with QoS

Because storage and live-migration are converged with the other traffic, we need to give them priority over the rest. If you are using RoCE (RDMA over Converged Ethernet), you can run the following script (taken from Microsoft):

# Turn on DCB
Install-WindowsFeature Data-Center-Bridging

# Set a policy for SMB-Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Turn on Flow Control for SMB
Enable-NetQosFlowControl -Priority 3

# Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Apply policy to the target adapters
Enable-NetAdapterQos -InterfaceAlias "CNA01"
Enable-NetAdapterQos -InterfaceAlias "CNA02"


# Give SMB Direct 50% of the bandwidth minimum
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

Priority Flow Control (PFC) must also be configured on the physical switches. With this script, the SMB traffic gets at least 50% of the bandwidth and the other traffic shares the remaining 50%.
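
To review the host-side DCB configuration, the following cmdlets show the flow control settings, the ETS traffic class and the per-adapter QoS state:

Get-NetQosFlowControl
Get-NetQosTrafficClass
Get-NetAdapterQos -Name CNA01, CNA02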

Set affinity between a vNIC and a physical NIC

In this example, I have deployed two vNICs dedicated to storage. Because these vNICs are bound to a SET, I cannot be sure that the storage traffic will be well spread across the two physical NICs. So I create an affinity rule between each vNIC and a physical NIC. While the physical NIC is online, the associated vNIC will use that specific physical NIC:

Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName Storage01-12 -ManagementOS -PhysicalNetAdapterName CNA01
Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName Storage02-12 -ManagementOS -PhysicalNetAdapterName CNA02

If CNA01 fails, the Storage01-12 vNIC will be bound to the CNA02 physical NIC. When CNA01 comes back online, the Storage01-12 vNIC will be re-associated with CNA01.
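
You can review the resulting mappings; both storage vNICs should appear with their assigned physical adapter:

Get-VMNetworkAdapterTeamMapping -ManagementOS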

Configure the IP addresses

Now you can configure the IP addresses for each vNIC:

New-NetIPAddress -InterfaceAlias "vEthernet (Management-10)" -IPAddress 10.10.0.5 -PrefixLength 24 -Type Unicast
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management-10)" -ServerAddresses 10.10.0.20

New-NetIPAddress -InterfaceAlias "vEthernet (Live-Migration-11)" -IPAddress 10.10.11.5 -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (Storage01-12)" -IPAddress 10.10.12.5 -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (Storage02-12)" -IPAddress 10.10.12.6 -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (Heartbeat-13)" -IPAddress 10.10.13.5 -PrefixLength 24 -Type Unicast

Both storage adapter IP addresses belong to the same network. Since Windows Server 2016, we can add two SMB NICs to the same subnet, thanks to the Simplified SMB Multichannel feature.
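
Once SMB traffic is flowing, you can check that SMB Multichannel uses both storage vNICs:

# Interfaces SMB considers usable, with their RDMA and RSS capabilities
Get-SmbClientNetworkInterface

# Active multichannel connections (only populated while SMB sessions exist)
Get-SmbMultichannelConnection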

Connect virtual machines

Now that the Hyper-V host is configured, you can create a VM and connect it to the virtual switch. All you have to do is configure the vSwitch and the VID.
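
For example, for a hypothetical VM named VM01 that should be connected to VM Network 1 (VID 100):

# VM01 is a placeholder name; adapt it to your environment
Connect-VMNetworkAdapter -VMName VM01 -SwitchName CNSwitch
Set-VMNetworkAdapterVlan -VMName VM01 -Access -VlanId 100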

Conclusion

With Windows Server 2016, network convergence is easier to deploy than with Windows Server 2012 R2. Network convergence brings you flexibility, simplicity, better cable management and quick deployment. Because you need fewer network adapters, this design reduces the cost of the solution. As we have seen with this example, a converged network is easy to deploy and can be fully automated with PowerShell. If you have Virtual Machine Manager, you can also make this kind of configuration from the VMM console.


