In this step-by-step guide, you'll learn how to install and configure Cisco Nexus Dashboard in an Amazon Web Services (AWS) environment. It covers the prerequisites (increasing AWS Elastic IP quotas, creating the necessary AWS policies, and subscribing to Nexus Dashboard in the AWS console) and then walks through deploying a virtual Nexus Dashboard (vND) cluster into AWS, building the cluster, and validating that it is up and running. The guide is intended for readers with basic knowledge of AWS and networking, and it assumes you have an AWS account and a laptop with an SSH client and a web browser.
Be sure to choose a subnet of /24 or /16; these are the CIDR sizes that the Nexus Dashboard CloudFormation template expects.
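If you prefer working from the AWS CLI instead of the console, the VPC and subnet can be created with commands like the following. This is a minimal sketch; the Name tags, VPC ID, and CIDR blocks are placeholders for your own values:

# Create a VPC to hold the Nexus Dashboard nodes (the CIDR is an example).
aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=vnd-vpc}]'

# Carve out a /24 subnet in that VPC (substitute the VPC ID returned above).
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.0.0/24 \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=vnd-subnet}]'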
You can add a name to the route table now to make it more easily identifiable later.
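The name is just a Name tag on the route table, so you can also apply it from the CLI (the route-table ID here is a placeholder):

# Tag the route table with a Name so it is easy to find later.
aws ec2 create-tags --resources rtb-0123456789abcdef0 --tags Key=Name,Value=vnd-route-table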
Alternatively, if you already have a key on your machine, you can import its public half instead of creating a new key pair.
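Both options are available from the CLI as well; a quick sketch using AWS CLI v2, with example key names and paths:

# Create a new key pair and save the private key locally...
aws ec2 create-key-pair --key-name vnd-keypair \
    --query 'KeyMaterial' --output text > vnd-keypair.pem

# ...or import the public half of a key you already have on your machine.
aws ec2 import-key-pair --key-name vnd-keypair \
    --public-key-material fileb://~/.ssh/id_rsa.pub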
Simply scroll to the bottom of the page; there is nothing to configure here. Click through to continue:
Again, scroll to the bottom of the page; there is nothing to configure here. Click through to continue:
Review the options you entered. If everything looks good, scroll down to the bottom of the page and submit the stack.
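If you ever need to repeat the deployment, the same stack can be launched from the CLI. A sketch, assuming the template has been downloaded locally; the stack and file names are examples, and you would normally also pass --parameters with the same values you entered in the console:

# Launch the stack from the downloaded template.
aws cloudformation create-stack \
    --stack-name vnd-cluster \
    --template-body file://nd-cloudformation.yaml \
    --capabilities CAPABILITY_IAM

# Block until creation finishes, then confirm the status.
aws cloudformation wait stack-create-complete --stack-name vnd-cluster
aws cloudformation describe-stacks --stack-name vnd-cluster \
    --query 'Stacks[0].StackStatus'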
NTP server: pool.ntp.org
DNS server: 8.8.8.8
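Before committing these values, you can sanity-check them from any machine with Internet access. Note that a ping only proves basic reachability (NTP itself runs over UDP port 123):

# Confirm the NTP pool resolves and responds.
ping -c 3 pool.ntp.org

# Confirm the DNS server answers queries.
nslookup cisco.com 8.8.8.8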
Copy the IP address and then validate the node in the Nexus Dashboard GUI. Be sure to give the node the name ND2, and then add the node.
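If you would rather pull the addresses from the CLI than from the EC2 console, something like the following works; the Name tag filter is a placeholder for however your stack names the instances:

# List the running ND instances with their public and private IPs.
aws ec2 describe-instances \
    --filters 'Name=tag:Name,Values=*nd*' 'Name=instance-state-name,Values=running' \
    --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value,PublicIpAddress,PrivateIpAddress]' \
    --output table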
Now repeat the same procedure for ND3:
Under Cluster and Nodes, review the configurations. If everything looks good, click Configure. There is one final confirmation screen; click through it when you are ready. The cluster-forming process can take 30 to 40 minutes. You can watch it proceed, or you can get some coffee :).
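Once the cluster is up, you can SSH to a node as rescue-user to verify its health from the CLI. Note the -o HostKeyAlgorithms=+ssh-rsa option: recent OpenSSH clients disable the legacy ssh-rsa host-key algorithm by default, and this flag re-enables it for the session: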
aleccham@ALECCHAM-M-6D7G Downloads % chmod 0400 vnd-keypair.pem
aleccham@ALECCHAM-M-6D7G Downloads % ssh -o HostKeyAlgorithms=+ssh-rsa -i vnd-keypair.pem rescue-user@3.232.141.23
rescue-user@ND1:~$
rescue-user@ND1:~$ acs health
All components are healthy
rescue-user@ND1:~$
rescue-user@ND1:~$ acs show cluster
┌─────────────┬─────────┬────────────────────────┬─────────────┬─────────────────┬───────────────┐
│ NAME        │ VERSION │ DNS                    │ # OF NODES  │ SERVICE NETWORK │ APP NETWORK   │
├─────────────┼─────────┼────────────────────────┼─────────────┼─────────────────┼───────────────┤
│ vnd-cluster │ 2.2.2d  │ vnd-cluster.case.local │ Masters: 3  │ 100.80.0.0/16   │ 172.17.0.1/16 │
│             │         │                        │ Workers: 0  │ ::/0            │ ::/0          │
│             │         │                        │ Standbys: 0 │                 │               │
└─────────────┴─────────┴────────────────────────┴─────────────┴─────────────────┴───────────────┘
rescue-user@ND1:~$ acs show masters
┌────────────────────┬────────────────┬──────────┬─────────┬────────────────────┬────────────────────┬─────────┐
│ NAME (*=SELF)      │ SERIAL         │ VERSION  │ ROLE    │ DATANETWORK        │ MGMTNETWORK        │ STATUS  │
├────────────────────┼────────────────┼──────────┼─────────┼────────────────────┼────────────────────┼─────────┤
│ *ND1               │ E7DA2421F043   │ 2.2.2d   │ Master  │ 100.0.0.75/28      │ 100.0.0.8/28       │ Active  │
│                    │                │          │         │ ::/0               │ ::/0               │         │
│ ------------------ │ -------------- │ -------- │ ------- │ ------------------ │ ------------------ │ ------- │
│ ND2                │ F0733E4DB1C2   │ 2.2.2d   │ Master  │ 100.0.0.91/28      │ 100.0.0.24/28      │ Active  │
│                    │                │          │         │ ::/0               │ ::/0               │         │
│ ------------------ │ -------------- │ -------- │ ------- │ ------------------ │ ------------------ │ ------- │
│ ND3                │ 9901242F2F41   │ 2.2.2d   │ Master  │ 100.0.0.100/28     │ 100.0.0.39/28      │ Active  │
│                    │                │          │         │ ::/0               │ ::/0               │         │
└────────────────────┴────────────────┴──────────┴─────────┴────────────────────┴────────────────────┴─────────┘
rescue-user@ND1:~$ acs system-config
schemaVersion: "1.0"
firmwareVersion: 2.2.2d
firmwareUpgradeTime: ""
nodeUUID: b97f64b2-b844-4774-8bcd-f258747cd3c5
nodeSerialNumber: E7DA2421F043
nodeName: ND1
nodeDNSDomain: vnd-cluster.case.local
nodeType: Virtual
nodePlatformType: AWS
nodeRole: Master
model: SE-VIRTUAL-APP
oobNetwork:
  name: ""
  subnet: 100.0.0.8/28
  subnetv6: ""
  interface: bond1br
  uplinks:
  - mgmt0
  - mgmt1
  bond: bond1
  vlan: ""
  bridge: bond1br
  gatewayIP: 100.0.0.1
  ifaceIP: 100.0.0.8
  vlanID: ""
  iface: bond1
  vip: ""
  external: ""
inbandNetwork:
  name: ""
  subnet: 100.0.0.75/28
  subnetv6: ""
  interface: bond0br
  uplinks:
  - fabric0
  - fabric1
  bond: bond0
  vlan: ""
  bridge: bond0br
  gatewayIP: 100.0.0.65
  ifaceIP: 100.0.0.75
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
dataNetwork0:
  name: ""
  subnet: ""
  subnetv6: ""
  interface: ""
  uplinks: []
  bond: ""
  vlan: ""
  bridge: ""
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
dataNetwork1:
  name: ""
  subnet: ""
  subnetv6: ""
  interface: ""
  uplinks: []
  bond: ""
  vlan: ""
  bridge: ""
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
appNetwork:
  name: ""
  subnet: 172.17.0.1/16
  subnetv6: ""
  interface: app-vnic
  uplinks: []
  bond: app-vnic
  vlan: ""
  bridge: ""
  ifaceIP: 172.17.0.1
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
serviceNetwork:
  name: ""
  subnet: 100.80.0.0/16
  subnetv6: ""
  interface: ""
  uplinks: []
  bond: ""
  vlan: ""
  bridge: ""
  ifaceIP: 100.80.0.0
  vlanID: ""
  iface: ""
  vip: ""
  external: ""
clusterType: ""
clusterName: vnd-cluster
clusterUUID: 766e642d-636c-7573-7465-720000000000
clusterNumMasters: 3
clusterVersion: 2.2.2d
seedList:
- ipAddress: 100.0.0.91
  ipv6Address: ""
  serialNumber: F0733E4DB1C2
  name: ND2
- ipAddress: 100.0.0.100
  ipv6Address: ""
  serialNumber: 9901242F2F41
  name: ND3
workerList: []
standbyList: []
ntpServers:
- pool.ntp.org
nameServers:
- 8.8.8.8
searchDomains: []
mode: standalone
firstMaster: true
containerRuntime: docker
timezone: ""
rescue-user@ND1:~$
Congratulations! Your Nexus Dashboard cluster is now running in AWS, and you can begin adding sites or installing applications onto the cluster. By following these step-by-step instructions, you have increased your AWS Elastic IP quota, created the necessary AWS policies, subscribed to Cisco Nexus Dashboard in the AWS console, deployed a vND cluster into AWS, and built and validated that cluster.