This lab guide covers one of the rarer cases in NSX-T: the Bare Metal (BM) Edge, a dedicated physical server that runs a special version of the NSX-T Edge software and provides the following:
- Higher bandwidth
- Sub-second failure detection between the edge node and the physical network
- Dedicated pNICs for overlay traffic
- Dedicated pNICs for external traffic
Many more details and design considerations can be found in the NSX-T Reference Design Guide:
https://nsx.techzone.vmware.com/resource/nsx-t-reference-design-guide-3-0#
However, in this document we will focus mainly on the configuration itself, specifically on the single N-VDS Bare Metal Edge use case with six pNICs. Referring to the diagram below, we will use the following:

- Management traffic uses two dedicated pNICs bonded in active/standby
- Multi-TEP configured to load balance overlay traffic on Uplink 1 and Uplink 2
- Deterministic BGP peering using additional named teaming policies for VLAN 300 and VLAN 400 respectively, so Uplink 3 will be used to peer with the left TOR and Uplink 4 with the right TOR.
Deployment
- Download the Bare Metal Edge ISO from:

- Create an empty VM on the ESXi host with the NICs shown below

- Set the VM resources as shown below to meet the minimum BM Edge requirements

- Attach the downloaded ISO file

- Power on the VM and choose automated install

- Choose the first interface (used later for management)

- Log in using the username admin and the password default; you will then be required to reset the admin password

- Since we are going to configure out-of-band management, which supports only active/standby and no VLAN tagging, the switch ports connected to the management pNICs should be configured as access ports. Below is the command for this configuration from the edge node perspective:
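The command screenshot is not reproduced here. As a rough sketch (the IP addressing is made up for the lab, and the exact syntax for the bonded management interface varies between NSX-T releases, so treat the mgmt-bond line as an assumption and check the CLI help on your edge), assigning the management IP from the edge CLI looks along these lines:

  # Single management interface
  set interface eth0 ip 192.168.100.21/24 gateway 192.168.100.1 plane mgmt

  # Bonded active/standby management (assumed syntax, version dependent)
  set interface mgmt-bond ip 192.168.100.21/24 gateway 192.168.100.1 plane mgmt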

- To ease things up, I logged in as root and enabled SSH on the BM Edge so I can SSH into it instead of using console access (sketched below)
- The next step is to join the NSX management plane
- We need to get the certificate API thumbprint from the NSX Manager

Run the below command from the edge node:
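The screenshots are not reproduced here; as a sketch of the commands involved (the manager IP and thumbprint are placeholders), enabling SSH, reading the API thumbprint on the NSX Manager and joining the management plane from the edge look roughly like this:

  # On the edge node (NSX CLI) - optional, allows SSH instead of console access
  start service ssh
  set service ssh start-on-boot

  # On the NSX Manager (NSX CLI) - copy the thumbprint from the output
  get certificate api thumbprint

  # Back on the edge node - join the management plane (you will be prompted
  # for the NSX Manager admin password)
  join management-plane <nsx-manager-ip> username admin thumbprint <api-thumbprint>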

Bare Metal Edge Configuration
After Edge Node registration, we log in to the NSX Manager UI; below is what we see once we navigate to the Edge Nodes page:

Prior to proceeding with the configuration of the edge node, we will create a VLAN TZ with named teaming policies specified. The reason behind this becomes clear if we check the BM Edge interfaces again:

We want fp-eth0 and fp-eth1 to carry the overlay traffic and fp-eth2 and fp-eth3 to carry the VLAN traffic used for BGP peering; to accomplish this we will be utilizing named teaming policies.
So the first step is to create a VLAN TZ:
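The TZ creation screen is not reproduced here; as an illustrative manager API equivalent (the TZ name is an assumption, and depending on the NSX-T version the payload may also require a host_switch_name), the important part is declaring the named teaming policies on the transport zone:

  curl -k -u admin -X POST 'https://<nsx-manager>/api/v1/transport-zones' \
    -H 'Content-Type: application/json' -d '{
      "display_name": "edge-vlan-tz",
      "transport_type": "VLAN",
      "uplink_teaming_policy_names": ["named1", "named2"]
    }'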

The second step is to create an uplink profile similar to the one below:
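The uplink profile screenshot is not reproduced here; as a sketch of what it expresses (the profile and uplink names are assumptions chosen to match the design above), the default teaming policy load-balances the overlay across two uplinks, while named1 and named2 each pin traffic to a single uplink:

  curl -k -u admin -X POST 'https://<nsx-manager>/api/v1/host-switch-profiles' \
    -H 'Content-Type: application/json' -d '{
      "resource_type": "UplinkHostSwitchProfile",
      "display_name": "bm-edge-uplink-profile",
      "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
          { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
          { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
        ]
      },
      "named_teamings": [
        { "name": "named1", "policy": "FAILOVER_ORDER",
          "active_list": [ { "uplink_name": "uplink-3", "uplink_type": "PNIC" } ] },
        { "name": "named2", "policy": "FAILOVER_ORDER",
          "active_list": [ { "uplink_name": "uplink-4", "uplink_type": "PNIC" } ] }
      ]
    }'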

We will see later on where the named teaming policies named1 and named2 are referenced.
Now let’s get back to the edge node and proceed with the configuration:
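The edge node switch configuration screens are not reproduced here; what they boil down to, shown as the corresponding fragment of the transport node's host_switch_spec in the manager API (all names and UUID placeholders are assumptions), is mapping fp-eth0/fp-eth1 to the overlay uplinks and fp-eth2/fp-eth3 to the uplinks used by the named teaming policies, attaching both transport zones, and assigning the TEP IPs:

  "host_switch_spec": {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [ {
      "host_switch_name": "nvds-1",
      "host_switch_profile_ids": [
        { "key": "UplinkHostSwitchProfile", "value": "<bm-edge-uplink-profile-uuid>" }
      ],
      "pnics": [
        { "device_name": "fp-eth0", "uplink_name": "uplink-1" },
        { "device_name": "fp-eth1", "uplink_name": "uplink-2" },
        { "device_name": "fp-eth2", "uplink_name": "uplink-3" },
        { "device_name": "fp-eth3", "uplink_name": "uplink-4" }
      ],
      "transport_zone_endpoints": [
        { "transport_zone_id": "<overlay-tz-uuid>" },
        { "transport_zone_id": "<edge-vlan-tz-uuid>" }
      ],
      "ip_assignment_spec": {
        "resource_type": "StaticIpPoolSpec",
        "ip_pool_id": "<tep-ip-pool-uuid>"
      }
    } ]
  }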

Upon completion we will notice that the BM edge is configured successfully with the respective IPs

In the next section we will create the Edge uplink segments and specify the named teaming policy under each VLAN, so that BGP traffic is pinned to fp-eth2 (VLAN 300) and fp-eth3 (VLAN 400) respectively.
In order to create a T0 on the newly created edge node, we need to create the edge uplink segments used for peering and also configure the edge cluster (normally two or more nodes, but since this is a lab we will use only the single node configured earlier).
Let's create the VLAN 300 and VLAN 400 segments, starting with VLAN 300:

Notice two things here: the VLAN TZ is the same one defined in the edge node configuration, and the named teaming policy specified here (named1) is the same one declared when we created the TZ and referenced in the uplink profile.
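As an illustrative Policy API equivalent of that screen (the segment ID and TZ UUID are placeholders), the named teaming policy is referenced through the segment's advanced configuration:

  curl -k -u admin -X PATCH 'https://<nsx-manager>/policy/api/v1/infra/segments/edge-uplink-vlan300' \
    -H 'Content-Type: application/json' -d '{
      "display_name": "edge-uplink-vlan300",
      "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<edge-vlan-tz-uuid>",
      "vlan_ids": ["300"],
      "advanced_config": { "uplink_teaming_policy_name": "named1" }
    }'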
Similarly, for VLAN 400

Next step is to create the edge cluster:
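The edge cluster screen is not reproduced here; via the manager API the equivalent is simply a cluster with our single edge transport node as a member (names and UUIDs are placeholders):

  curl -k -u admin -X POST 'https://<nsx-manager>/api/v1/edge-clusters' \
    -H 'Content-Type: application/json' -d '{
      "display_name": "bm-edge-cluster",
      "members": [ { "transport_node_id": "<bm-edge-transport-node-uuid>" } ]
    }'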

T0 preparation
Create a T0 based on the configured Edge Cluster:

Next we configure the interfaces based on the previously created uplink segments:
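As an illustrative Policy API equivalent of this screen (the T0 name, locale-services ID, prefix length and edge node path are assumptions; 10.30.1.2 comes from the lab addressing), the VLAN 300 uplink interface would look roughly like:

  curl -k -u admin -X PATCH \
    'https://<nsx-manager>/policy/api/v1/infra/tier-0s/bm-t0/locale-services/default/interfaces/uplink-vlan300' \
    -H 'Content-Type: application/json' -d '{
      "segment_path": "/infra/segments/edge-uplink-vlan300",
      "subnets": [ { "ip_addresses": ["10.30.1.2"], "prefix_len": 30 } ],
      "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>/edge-nodes/<edge-node-id>"
    }'

The VLAN 400 interface is configured in the same way on the other uplink segment.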


The corresponding point-to-point interfaces are configured on the Cisco router for VLAN 300 and VLAN 400:
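The router configuration screenshot is not reproduced here; an illustrative IOS-XE equivalent (the interface numbers, the /30 masks and the VLAN 400 subnet are assumptions, while 10.30.1.2 on the NSX side comes from the lab) would look along these lines:

  interface GigabitEthernet2.300
   encapsulation dot1Q 300
   ip address 10.30.1.1 255.255.255.252
  !
  interface GigabitEthernet2.400
   encapsulation dot1Q 400
   ip address 10.40.1.1 255.255.255.252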

For the sake of simplicity, the above interfaces are configured on a single router instead of the two routers shown in the diagram, and since the BGP configuration itself is pretty straightforward we will proceed without it. What matters here is validating that VLAN 300 is pinned to fp-eth2 and VLAN 400 to fp-eth3. Since the mapping is not one-to-one (for example, eth0 and eth1 configured for management map to vmnic1 and vmnic5 of the VM), to confirm which vmnic fp-eth2 maps to we are going to ping the point-to-point interface 10.30.1.2 from the CSR and disconnect the vmnics (2, 3, 4, 6) one at a time, proving that the named teaming policy is correct and that it follows the below diagram:

Named teaming policy Validation
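To run the test, a continuous ping is started from the CSR toward the VLAN 300 point-to-point address of the T0 (the repeat count is arbitrary):

  ping 10.30.1.2 repeat 1000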

Upon disconnecting vmnic6, the ping fails


After reconnecting vmnic6, the ping succeeds again:

This means that fp-eth2 corresponds to vmnic6 and that VLAN 300 traffic leaves only through the single interface defined in the named teaming policy.
Doing the same test for VLAN 400, we find that fp-eth3 corresponds to vmnic4, which also follows the named teaming policy.
That leaves vmnic2 and vmnic3 for the overlay traffic, since the default teaming policy is always used for overlay traffic.