In this step-by-step guide, you’ll learn how to install and configure the Cisco Nexus Dashboard in an Amazon Web Services (AWS) environment. This guide includes instructions on how to deploy a virtual Nexus Dashboard (vND) cluster into AWS, build the Nexus Dashboard cluster, and verify that it is up and running. It also covers how to increase AWS Elastic IP quotas, create the necessary AWS resources, and subscribe to the Nexus Dashboard in the AWS console. The guide is intended for readers with basic knowledge of AWS and networking, and it assumes that you have an AWS account and a laptop with SSH capability and a web browser.

What You’ll Learn

What You’ll Need

Some Tips to Remember

  1. Navigate to the AWS Quotas console by searching for “quotas” in the AWS console.

Search for quotas

  1. Click Amazon Elastic Compute Cloud (Amazon EC2):

Click Amazon Elastic Compute Cloud (Amazon EC2)

  1. Search for Elastic IPs. Click the quota name EC2-VPC Elastic IPs:

Locating the EC2-VPC Elastic IPs

  1. Find Recent quota increase requests and click to begin the process of increasing your Elastic IPs:

Increase the Elastic IPs quota

  1. Enter the number of Elastic IPs you need. Remember that a vND deployment requires a minimum of six.

Input the needed number of IPs
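If you prefer the command line, the same request can be made through the AWS Service Quotas API. Here is a minimal sketch, assuming the quota code L-0263D0A3 (the code AWS uses for EC2-VPC Elastic IPs; confirm it in your console) and a target of six addresses:

# Check the current EC2-VPC Elastic IPs quota
aws service-quotas get-service-quota --service-code ec2 --quota-code L-0263D0A3

# Request an increase to six Elastic IPs, the vND minimum
aws service-quotas request-service-quota-increase --service-code ec2 --quota-code L-0263D0A3 --desired-value 6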

  1. Navigate to the VPC console and create a VPC in an AWS region of your choice, with a private CIDR between /16 and /24.

Navigate to the AWS VPC console

Create an AWS VPC

Be sure to choose a subnet between /16 and /24, which is the CIDR range that the Nexus Dashboard CloudFormation template expects.
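If you are comfortable with the AWS CLI, the VPC can also be created from a terminal. A minimal sketch, assuming the 100.0.0.0/24 CIDR used later in this guide and us-east-1 as a placeholder region:

# Create the VPC with a private /24 CIDR (anywhere from /16 to /24 works)
aws ec2 create-vpc --cidr-block 100.0.0.0/24 --region us-east-1

# Optionally tag it with a name so it is easy to find later (substitute the VpcId returned above)
aws ec2 create-tags --resources vpc-0123456789abcdef0 --tags Key=Name,Value=vnd-vpc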

  1. Create an IGW and associate it with the VPC you created:

Navigate to the IGW Console

Click Create Internet Gateway

Navigate to the newly created IGW

Attach the new IGW to the previously created VPC
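The equivalent CLI steps, with placeholder resource IDs you would substitute with your own:

# Create the internet gateway
aws ec2 create-internet-gateway

# Attach it to the previously created VPC
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0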

  1. Create a route table and associate it with the VPC. Add the default route (0.0.0.0/0), pointing to the IGW you created:

Find your VPC UUID

Find the associated route table for the created VPC

You can add a name to the route table now to make it more easily identifiable later.

Edit the route table

Add a new route and point it at the previously created IGW
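This step can also be scripted; a sketch with placeholder IDs:

# Find the route table associated with your VPC
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0123456789abcdef0

# Add the default route, pointing at the previously created IGW
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0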

  1. Create a key pair in the AWS console by navigating to the EC2 console and finding the Key Pairs menu on the left navigation pane. Click Create key pair; name the key pair and choose the format for your SSH session.

Navigate to EC2 Console and find Key Pairs menu on the left navigation pane

Click Create key pair

Name the key pair and choose the format for your SSH session

You can also upload an existing key pair if you already have your own key on your machine.
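Both paths are available from the CLI as well; a sketch using the key name referenced later in this guide:

# Create a new key pair and save the private key locally
aws ec2 create-key-pair --key-name vnd-keypair --query KeyMaterial --output text > vnd-keypair.pem
chmod 0400 vnd-keypair.pem

# Or import an existing public key from your machine
aws ec2 import-key-pair --key-name vnd-keypair --public-key-material fileb://~/.ssh/id_rsa.pub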

  1. Navigate to the following URL: https://aws.amazon.com/marketplace/pp/prodview-agdixxd5lgi6q
    • You can also search Google for “AWS Marketplace Nexus Dashboard” if this URL changes.
  2. Once you are at the Nexus Dashboard AWS Marketplace page, as shown below, navigate to the Continue to Subscribe button and click through the subscription GUI.

AWS Nexus Dashboard Marketplace subscription to ND

  1. Now, click the Continue to Configuration button:

AWS Nexus Dashboard Marketplace continuing to configuration of the CloudFormation template

  1. Once you are in the CloudFormation configuration wizard, as shown below, select your desired software version and the needed AWS region:

AWS Nexus Dashboard Marketplace choosing Software version and AWS Region

  1. Click Continue to Launch to proceed to the CloudFormation template:

AWS Nexus Dashboard continue to CloudFormation configuration

  1. Once in the CloudFormation configuration screen, you can simply use the default options. Scroll to the bottom of the screen and click Next:

Start of the CloudFormation GUI

  1. Choose the AWS objects that we created in the previous steps and fill in the remaining stack details. These include:
    • Stack name
    • VPC
    • Subnet CIDR
    • Availability Zones
    • Number of Availability Zones
    • Instance size
    • Cluster password
    • AWS SSH key pair
    • Subnet access control (which subnets can access the cluster; 0.0.0.0/0 is easiest, but you can be more specific)

Input of CloudFormation Options

Input of CloudFormation Options2

Input of CloudFormation Options3

Input of CloudFormation Options4

Simply scroll to the bottom of the page; there is nothing to configure here. Click through to continue:

Input of CloudFormation Options5

Again, scroll to the bottom of the page; there is nothing to configure here. Click through to continue:

Input of CloudFormation Options6

Review the options you entered. If everything looks good, scroll to the bottom of the page and submit the stack.
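If you ever need to repeat this deployment unattended, the same stack can be launched with the CloudFormation CLI. This is a rough sketch only; the template URL and parameter keys below are hypothetical placeholders, since the real names are defined by the Marketplace template version you selected:

# Launch the stack non-interactively (copy the real template URL and parameter keys
# from the Marketplace CloudFormation page before using this)
aws cloudformation create-stack --stack-name vnd-cluster \
  --template-url https://example-bucket.s3.amazonaws.com/nexus-dashboard-template.yaml \
  --parameters ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
               ParameterKey=KeyName,ParameterValue=vnd-keypair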

  1. Once the stack has been submitted, you can watch the deployment progress on the screen:

Watching the deployment
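You can also poll the deployment from a terminal instead; a sketch assuming you named the stack vnd-cluster:

# Check the overall stack status (CREATE_COMPLETE when finished)
aws cloudformation describe-stacks --stack-name vnd-cluster --query 'Stacks[0].StackStatus'

# Or follow the resource-by-resource events
aws cloudformation describe-stack-events --stack-name vnd-cluster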

  1. Upon completion, you should see something similar to the following in your AWS console:

Successful vND CloudFormation Stack

  1. Navigate to the EC2 console and find the ND1 node of your new Nexus Dashboard cluster and its public IP:

Locating ND1 and Public IP of ND1
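If you would rather query for it, a describe-instances call can surface the public IP. A sketch that leans on the aws:cloudformation:stack-name tag that CloudFormation applies to the resources it creates; adjust the stack name to yours:

# List node names and public IPs for the stack
aws ec2 describe-instances --filters Name=tag:aws:cloudformation:stack-name,Values=vnd-cluster \
  --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value,PublicIpAddress]' --output table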

  1. Use your browser to navigate to the public IP and log in to ND1 with the password that you entered during the CloudFormation configuration.

Accessing ND WebUI

Accessing ND WebUI2

  1. When Nexus Dashboard is deployed in the cloud, you are unlikely to need a proxy configuration. You must disable the proxy by hovering over the “i” icon and then clicking the Skip button. Once in the GUI, input the following:
    • Cluster name
    • NTP server: pool.ntp.org
    • DNS server: 8.8.8.8
    • Note that you must click the check marks to submit your inputs.

Configuring ND Cluster Information

  1. Let’s configure the individual nodes. Starting with ND1, the information is already populated, so all you need to provide is the ND1 node name. The Next button stays greyed out until at least one node has been configured. Once you have entered the ND1 node name, add another node.

Configuring ND Cluster Information

Inputting ND1 Node Name

Add another ND node

  1. To add the other ND node, you need to gather its assigned IP address from the 100.0.0.0/24 VPC CIDR that we created previously. This IP is used to validate the node so that it can join the cluster. It will always be the lower of the node’s two IP addresses.

Find Node 2 IP for clustering

ND2 IP Information
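The same query pattern from the ND1 step works here, swapped to private addresses. Note that this returns each instance’s primary private IP, so cross-check it against the two addresses shown in the console:

# List node names and primary private IPs for the stack
aws ec2 describe-instances --filters Name=tag:aws:cloudformation:stack-name,Values=vnd-cluster \
  --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value,PrivateIpAddress]' --output table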

Copy the IP address and then validate the node in the Nexus Dashboard GUI. Be sure to provide ND2 with a node name and add the node.

Configuring ND2

  1. Now you can do the same procedure for ND3:

    • Navigate to the AWS EC2 console to first find the ND3 instance and its associated IP in the VPC.
    • Add the ND3 node, using the IP found in the AWS console.
    • Provide ND3 with a node name.
  2. Under Cluster and Nodes, review the configurations. If everything looks good, click Configure. There is one final confirmation screen; click through it when you are ready. The cluster-forming process can take 30 to 40 minutes. You can watch it proceed, or you can go get some coffee :).

Reviewing ND Cluster Configuration

Final Review Cluster Info

Watching the ND Progress Bar and More Info

ND Cluster Login Screen

  1. To SSH to the cluster, find the public IP of any Nexus Dashboard node and use the key pair that you created for the Dashboard cluster. Be sure to change the permissions on your key pair file first:
aleccham@ALECCHAM-M-6D7G Downloads % chmod 0400 vnd-keypair.pem
aleccham@ALECCHAM-M-6D7G Downloads % ssh -o HostKeyAlgorithms=+ssh-rsa  -i vnd-keypair.pem rescue-user@3.232.141.23
rescue-user@ND1:~$
  1. Once in the cluster, you can check health and cluster information with the following commands:
rescue-user@ND1:~$ acs health
All components are healthy
rescue-user@ND1:~$
rescue-user@ND1:~$ acs show cluster
┌─────────────┬─────────┬────────────────────────┬─────────────┬─────────────────┬───────────────┐
│ NAME        │ VERSION │ DNS                    │ # OF NODES  │ SERVICE NETWORK │ APP NETWORK   │
├─────────────┼─────────┼────────────────────────┼─────────────┼─────────────────┼───────────────┤
│ vnd-cluster │ 2.2.2d  │ vnd-cluster.case.local │ Masters:  3 │ 100.80.0.0/16   │ 172.17.0.1/16 │
│             │         │                        │ Workers:  0 │ ::/0            │ ::/0          │
│             │         │                        │ Standbys: 0 │                 │               │
└─────────────┴─────────┴────────────────────────┴─────────────┴─────────────────┴───────────────┘
rescue-user@ND1:~$ acs show masters
┌────────────────────┬────────────────┬──────────┬─────────┬────────────────────┬────────────────────┬─────────┐
│ NAME (*=SELF)      │ SERIAL         │ VERSION  │ ROLE    │ DATANETWORK        │ MGMTNETWORK        │ STATUS  │
├────────────────────┼────────────────┼──────────┼─────────┼────────────────────┼────────────────────┼─────────┤
│ *ND1               │ E7DA2421F043   │ 2.2.2d   │ Master  │ 100.0.0.75/28      │ 100.0.0.8/28       │ Active  │
│                    │                │          │         │ ::/0               │ ::/0               │         │
│ ------------------ │ -------------- │ -------- │ ------- │ ------------------ │ ------------------ │ ------- │
│ ND2                │ F0733E4DB1C2   │ 2.2.2d   │ Master  │ 100.0.0.91/28      │ 100.0.0.24/28      │ Active  │
│                    │                │          │         │ ::/0               │ ::/0               │         │
│ ------------------ │ -------------- │ -------- │ ------- │ ------------------ │ ------------------ │ ------- │
│ ND3                │ 9901242F2F41   │ 2.2.2d   │ Master  │ 100.0.0.100/28     │ 100.0.0.39/28      │ Active  │
│                    │                │          │         │ ::/0               │ ::/0               │         │
└────────────────────┴────────────────┴──────────┴─────────┴────────────────────┴────────────────────┴─────────┘
rescue-user@ND1:~$ acs system-config
schemaVersion: "1.0"
firmwareVersion: 2.2.2d
firmwareUpgradeTime: ""
nodeUUID: b97f64b2-b844-4774-8bcd-f258747cd3c5
nodeSerialNumber: E7DA2421F043
nodeName: ND1
nodeDNSDomain: vnd-cluster.case.local
nodeType: Virtual
nodePlatformType: AWS
nodeRole: Master
model: SE-VIRTUAL-APP
oobNetwork:
  name: ""
  subnet: 100.0.0.8/28
  subnetv6: ""
  interface: bond1br
  uplinks:
  - mgmt0
  - mgmt1
  bond: bond1
  vlan: ""
  bridge: bond1br
  gatewayIP: 100.0.0.1
  ifaceIP: 100.0.0.8
  vlanID: ""
  iface: bond1
  vip: ""
  external: ""
inbandNetwork:
  name: ""
  subnet: 100.0.0.75/28
  subnetv6: ""
  interface: bond0br
  uplinks:
  - fabric0
  - fabric1
  bond: bond0
  vlan: ""
  bridge: bond0br
  gatewayIP: 100.0.0.65
  ifaceIP: 100.0.0.75
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
dataNetwork0:
  name: ""
  subnet: ""
  subnetv6: ""
  interface: ""
  uplinks: []
  bond: ""
  vlan: ""
  bridge: ""
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
dataNetwork1:
  name: ""
  subnet: ""
  subnetv6: ""
  interface: ""
  uplinks: []
  bond: ""
  vlan: ""
  bridge: ""
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
appNetwork:
  name: ""
  subnet: 172.17.0.1/16
  subnetv6: ""
  interface: app-vnic
  uplinks: []
  bond: app-vnic
  vlan: ""
  bridge: ""
  ifaceIP: 172.17.0.1
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
serviceNetwork:
  name: ""
  subnet: 100.80.0.0/16
  subnetv6: ""
  interface: ""
  uplinks: []
  bond: ""
  vlan: ""
  bridge: ""
  ifaceIP: 100.80.0.0
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
clusterType: ""
clusterName: vnd-cluster
clusterUUID: 766e642d-636c-7573-7465-720000000000
clusterNumMasters: 3
clusterVersion: 2.2.2d
seedList:
- ipAddress: 100.0.0.91
  ipv6Address: ""
  serialNumber: F0733E4DB1C2
  name: ND2
- ipAddress: 100.0.0.100
  ipv6Address: ""
  serialNumber: 9901242F2F41
  name: ND3
workerList: []
standbyList: []
ntpServers:
- pool.ntp.org
nameServers:
- 8.8.8.8
searchDomains: []
mode: standalone
firstMaster: true
containerRuntime: docker
timezone: ""
rescue-user@ND1:~$

Congratulations! Your Nexus Dashboard cluster is now running in AWS. You can now begin to add sites or install applications onto the cluster. By following these step-by-step instructions, you have increased your AWS Elastic IP quota, created the necessary AWS resources, subscribed to the Cisco Nexus Dashboard in the AWS console, and deployed, built, and validated a vND cluster.

Learn More