In this tutorial, you will explore how Cisco Cloud Native Application Observability (CNAO) combines application performance monitoring (APM), infrastructure monitoring, and machine instrumentation to deliver end-to-end visibility into your cloud-native applications. You will gain insight into how CNAO enhances application efficiency and learn how Cisco monitors and helps optimize your entire application stack. By the end of this tutorial, you will have a solid understanding of how CNAO, APM, infrastructure, and machine instrumentation work together to keep cloud-native applications performing at their best.

What You’ll Learn

What You’ll Need

Some Tips to Remember

  1. First, we need to install eksctl and kubectl; we will use these two programs to configure and interact with our Kubernetes clusters. AWS publishes a well-written installation guide for both tools. Below, I highlight the steps I took to get the two programs installed on my macOS device.
  1. Using the commands provided in that guide, I ran the following to download the kubectl binary and make it available on my PATH:
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.26.2/2023-03-17/bin/darwin/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
aleccham@ALECCHAM-M-6D7G ~ % kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.12-eks-ec5523e", GitCommit:"3939bb9475d7f05c8b7b058eadbe679e6c9b5e2e", GitTreeState:"clean", BuildDate:"2023-03-20T21:30:46Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
aleccham@ALECCHAM-M-6D7G ~ %
  1. Now we can install eksctl. Following the guide provided on eksctl.io, we can quickly install the tool on our macOS system. I use Homebrew as my package manager on macOS; feel free to use the package manager for your system, or the direct download shown below.
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
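If you are not on macOS or prefer not to use Homebrew, eksctl.io also documents a direct-download installation. At the time of writing, the Linux variant looked like the following (check the site for the current command before running it):
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin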
  1. We can validate that the installation proceeded as planned.
aleccham@ALECCHAM-M-6D7G ~ % eksctl version
0.131.0-dev+d4917e5d1.2023-02-23T12:47:39Z
aleccham@ALECCHAM-M-6D7G ~ %
  1. Deploying an EKS Kubernetes cluster can be done very quickly using one command. I have made some modifications to the defaults. Specifically, I am naming the cluster cvf-k8-cloud, deploying into us-east-1, using m5.large worker nodes, requesting six nodes with a maximum of seven, enabling SSH access with my key pair, and using a managed nodegroup:
eksctl create cluster --name cvf-k8-cloud --region us-east-1 --node-type m5.large --nodes 6 --nodes-min 6 --nodes-max 7 --ssh-access --ssh-public-key=aleccham --managed
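If you prefer a declarative workflow, the same cluster can be expressed as an eksctl config file and created with eksctl create cluster -f cluster.yaml. The sketch below mirrors the flags above; the nodegroup name ng-1 is arbitrary:
# cluster.yaml: declarative equivalent of the one-line command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cvf-k8-cloud
  region: us-east-1
managedNodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 6
    minSize: 6
    maxSize: 7
    ssh:
      allow: true
      publicKeyName: aleccham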
  1. Running the create cluster command will take some time, but at its completion you will have a fully working Kubernetes cluster into which you can deploy the application that we will be monitoring using CNAO. The following output shows a successfully deployed EKS Kubernetes cluster:
aleccham@ALECCHAM-M-6D7G cappd % eksctl create cluster --name cvf-k8-cloud --region us-east-1 --node-type m5.large --nodes 6 --nodes-min 6 --nodes-max 7 --ssh-access --ssh-public-key=aleccham --managed
2023-04-24 13:08:02 [ℹ]  eksctl version 0.131.0-dev+d4917e5d1.2023-02-23T12:47:39Z
2023-04-24 13:08:02 [ℹ]  using region us-east-1
2023-04-24 13:08:03 [ℹ]  skipping us-east-1e from selection because it doesn't support the following instance type(s): m5.large
2023-04-24 13:08:03 [ℹ]  setting availability zones to [us-east-1a us-east-1d]
2023-04-24 13:08:03 [ℹ]  subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2023-04-24 13:08:03 [ℹ]  subnets for us-east-1d - public:192.168.32.0/19 private:192.168.96.0/19
2023-04-24 13:08:03 [ℹ]  nodegroup "ng-50be5651" will use "" [AmazonLinux2/1.24]
2023-04-24 13:08:03 [ℹ]  using EC2 key pair %!q(*string=<nil>)
2023-04-24 13:08:03 [ℹ]  using Kubernetes version 1.24
2023-04-24 13:08:03 [ℹ]  creating EKS cluster "cvf-k8-cloud" in "us-east-1" region with managed nodes
2023-04-24 13:08:03 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2023-04-24 13:08:03 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=cvf-k8-cloud'
2023-04-24 13:08:03 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cvf-k8-cloud" in "us-east-1"
2023-04-24 13:08:03 [ℹ]  CloudWatch logging will not be enabled for cluster "cvf-k8-cloud" in "us-east-1"
2023-04-24 13:08:03 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=cvf-k8-cloud'
2023-04-24 13:08:03 [ℹ]
2 sequential tasks: { create cluster control plane "cvf-k8-cloud",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-50be5651",
    }
}
2023-04-24 13:08:03 [ℹ]  building cluster stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:08:04 [ℹ]  deploying stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:08:34 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:09:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:10:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:11:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:12:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:13:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:14:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:15:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:16:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:17:05 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:18:05 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-cluster"
2023-04-24 13:20:06 [ℹ]  building managed nodegroup stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:20:08 [ℹ]  deploying stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:20:08 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:20:38 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:21:20 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:22:09 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:23:14 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:24:56 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k8-cloud-nodegroup-ng-50be5651"
2023-04-24 13:24:56 [ℹ]  waiting for the control plane to become ready
2023-04-24 13:24:56 [✔]  saved kubeconfig as "/Users/aleccham/.kube/config"
2023-04-24 13:24:56 [ℹ]  no tasks
2023-04-24 13:24:56 [✔]  all EKS cluster resources for "cvf-k8-cloud" have been created
2023-04-24 13:24:56 [ℹ]  nodegroup "ng-50be5651" has 6 node(s)
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-13-41.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-22-83.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-37-7.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-54-65.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-41-245.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  waiting for at least 4 node(s) to become ready in "ng-50be5651"
2023-04-24 13:24:56 [ℹ]  nodegroup "ng-50be5651" has 6 node(s)
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-13-41.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-22-83.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-37-7.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-54-65.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-41-245.ec2.internal" is ready
2023-04-24 13:24:57 [ℹ]  kubectl command should work with "/Users/aleccham/.kube/config", try 'kubectl get nodes'
2023-04-24 13:24:57 [✔]  EKS cluster "cvf-k8-cloud" in "us-east-1" region is ready
aleccham@ALECCHAM-M-6D7G cluster-agent-rhel-bundled-distribution %
  1. Validate that you have a working Kubernetes cluster by checking the node status of the EKS cluster. You want to see that all of your nodes have reached the Ready status.
aleccham@ALECCHAM-M-6D7G cappd % kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-13-41.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-22-83.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-37-7.ec2.internal    Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-54-65.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-36-69.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-41-245.ec2.internal  Ready    <none>   12m   v1.24.11-eks-0a21954
aleccham@ALECCHAM-M-6D7G cappd %
  1. To establish a secure connection between CNAO and the Kubernetes cluster, we can either generate custom certificates or deploy cert-manager in Kubernetes to manage the certificates for us. Opting for cert-manager simplifies the process, so that is what we will use for this deployment. To install cert-manager, execute the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
aleccham@ALECCHAM-M-6D7G cappd % kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
aleccham@ALECCHAM-M-6D7G cappd %
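cert-manager takes a minute or so to come up. Before continuing, you can confirm that its three deployments (the controller, cainjector, and webhook) are running:
kubectl get pods -n cert-manager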

The content of this topic was taken from the CNAO Installation Guide and modified for my environment.

  1. Install Helm.
brew install helm
  1. Delete any custom resource definitions (CRDs) left over from a previous AppDynamics installation. List the CRDs first, then delete any AppDynamics-related entries:
kubectl get crds
kubectl delete crds <crd-names>
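On a freshly built cluster there will usually be nothing to delete. A quick filter narrows the list to anything left over from a prior install:
kubectl get crds | grep -i appdynamics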
  1. Add the AppDynamics Helm chart repository:
helm repo add appdynamics-cloud-helmcharts https://appdynamics.jfrog.io/artifactory/appdynamics-cloud-helmcharts/
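After adding the repository, refresh the local chart index and confirm the charts are visible:
helm repo update
helm search repo appdynamics-cloud-helmcharts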
  1. Create the appdynamics namespace for the agents to run in.
kubectl create ns appdynamics
  1. Navigate to your CNAO instance GUI and click the Configure gear icon. Then, click Kubernetes and APM so that we can use the CNAO Kubernetes file wizard.

Generating the YAML Files

  1. Download or copy the collectors-values.yaml and operators-values.yaml files from the CNAO GUI and place them into your desired install directory. Once they are in place, we can run the Helm installation commands to deploy the containers into our Kubernetes cluster.

Downloading the two YAML files

aleccham@ALECCHAM-M-6D7G cappd % cat collectors-values.yaml
global:
   clusterName: cvf-cloud
appdynamics-otel-collector:
   clientId: agt_mnGwpcKeyln9i2ifKp7Sv
   clientSecret: yQy-r3vt888XkzzrNkA1O3ADINoEmLGyLPwpo3H1aEU
   endpoint: https://cisco-cvd.observe.appdynamics.com/data
   tokenUrl: https://cisco-cvd.observe.appdynamics.com/auth/a17cdaca-ae90-40a0-b3ac-636a1e9844f2/default/oauth2/token%
aleccham@ALECCHAM-M-6D7G cappd %
aleccham@ALECCHAM-M-6D7G cappd % cat operators-values.yaml
global:
  clusterName: cvf-cloud
fso-agent-mgmt-client:
  solution:
    endpoint: https://cisco-cvd.observe.appdynamics.com
  oauth:
    clientId: agt_27IgRvrTsmenwM6fql8mEN
    clientSecret: NRQ3vjfZ58SdQG_xWjcGeHlkIV0xLPkvjznrlBQmoSA
    tokenUrl: https://cisco-cvd.observe.appdynamics.com/auth/a17cdaca-ae90-40a0-b3ac-636a1e9844f2/default/oauth2/token
    tenantId: a17cdaca-ae90-40a0-b3ac-636a1e9844f2%
aleccham@ALECCHAM-M-6D7G cappd %
  1. Issue the following command to install the CNAO operators into the Kubernetes cluster:
aleccham@ALECCHAM-M-6D7G cappd % helm install appdynamics-operators appdynamics-cloud-helmcharts/appdynamics-operators -n appdynamics -f operators-values.yaml --wait
NAME: appdynamics-operators
LAST DEPLOYED: Mon Jul 10 12:07:24 2023
NAMESPACE: appdynamics
STATUS: deployed
REVISION: 1
NOTES:
CHART NAME: appdynamics-operators
CHART VERSION: 1.11.124
APP VERSION: 1.11.124

** Please be patient while the chart is being deployed **

The chart installs the following components

1) OpenTelemetry Operator

2) AppDynamics Cloud Operator
	Description: Uses custom resources to manage the life cycle of Cluster Collector, Infrastructure Collector and Log Collector.


THIRD PARTY LICENSE DISCLOSURE
===============================

AppDynamics Cloud Operator
--------------------------------------------------
https://www.cisco.com/c/dam/en_us/about/doing_business/open_source/docs/AppDynamics_Cloud_Operator-2350-1684848502.pdf
aleccham@ALECCHAM-M-6D7G cappd %
  1. Now we can deploy the CNAO collectors using the following command:
aleccham@ALECCHAM-M-6D7G cappd % helm install appdynamics-collectors appdynamics-cloud-helmcharts/appdynamics-collectors -n appdynamics -f collectors-values.yaml

NAME: appdynamics-collectors
LAST DEPLOYED: Mon Jul 10 12:08:29 2023
NAMESPACE: appdynamics
STATUS: deployed
REVISION: 1
NOTES:
CHART NAME: appdynamics-collectors
CHART VERSION: 1.10.540
APP VERSION: 1.10.540

** Please be patient while the chart is being deployed **

The chart installs the following components

1) AppDynamics OpenTelemetry Collector

2) AppDynamics Cloud Infrastructure Collector
	Enabled: true
	Description: Installs the Server Collector and Container Collector to monitor the host and container metrics

3) AppDynamics Cloud Cluster Collector
	Enabled: true
	Description: Installs the Cluster Collector to monitor the kubernetes metrics and events

4) AppDynamics Cloud Log Collector
	Enabled: false
	Description: Installs the Log Collector to collect the logs from applications running in kubernetes cluster

5) AppDynamics Cloud Database Collector
	Enabled: false
	Description: Installs the DB Collector to collect metrics and monitors the Databases specified in DbConfigs



THIRD PARTY LICENSE DISCLOSURE
===============================

AppDynamics OpenTelemetry Collector
--------------------------------------------------
https://www.cisco.com/c/dam/en_us/about/doing_business/open_source/docs/AppDynamics_Distribution_for_OpenTelemetry_Collector-2350-1684861566.pdf

AppDynamics Cloud Cluster Collector
--------------------------------------------------
https://www.cisco.com/c/dam/en_us/about/doing_business/open_source/docs/AppDynamics_Cloud_Clustermon-2350-1684903634.pdf

AppDynamics Cloud Infrastructure Collector
--------------------------------------------------
https://www.cisco.com/c/dam/en_us/about/doing_business/open_source/docs/Appdynamics_infraagent_levitate-2310-1674179892.pdf

AppDynamics Cloud Log Collector
----------------------------
https://www.cisco.com/c/dam/en_us/about/doing_business/open_source/docs/Appdynamics_Beats_Levitate-2340-1682055351.pdf

AppDynamics Database Collector
----------------------------
https://www.cisco.com/c/dam/en_us/about/doing_business/open_source/docs/AppdynamicsDBCollectorAgent-2330-1679376239.pdf
aleccham@ALECCHAM-M-6D7G cappd %
  1. We can validate that the Helm charts deployed successfully by checking for pods being created in our appdynamics namespace. We are looking for every pod to show a STATUS of Running.
aleccham@ALECCHAM-M-6D7G cappd % kubectl get pods -n appdynamics
NAME                                                              READY   STATUS    RESTARTS   AGE
appdynamics-collectors-appdynamics-clustermon-75cbd56654-xq5kb    1/1     Running   0          4d
appdynamics-collectors-appdynamics-inframon-5q5hp                 1/1     Running   0          4d
appdynamics-collectors-appdynamics-inframon-dkkd5                 1/1     Running   0          4d
appdynamics-collectors-appdynamics-inframon-gtkml                 1/1     Running   0          4d
appdynamics-collectors-appdynamics-inframon-lzf7k                 1/1     Running   0          4d
appdynamics-collectors-appdynamics-inframon-r4lpt                 1/1     Running   0          4d
appdynamics-collectors-appdynamics-inframon-wwnc9                 1/1     Running   0          4d
appdynamics-collectors-appdynamics-otel-co-collector-2tnrd        1/1     Running   0          4d
appdynamics-collectors-appdynamics-otel-co-collector-2z57x        1/1     Running   0          4d
appdynamics-collectors-appdynamics-otel-co-collector-9xb84        1/1     Running   0          4d
appdynamics-collectors-appdynamics-otel-co-collector-q8qh6        1/1     Running   0          4d
appdynamics-collectors-appdynamics-otel-co-collector-qf8qg        1/1     Running   0          4d
appdynamics-collectors-appdynamics-otel-co-collector-zbn26        1/1     Running   0          4d
appdynamics-operators-appdynamics-cloud-operator-6687fb59cqkgxw   2/2     Running   0          4d
appdynamics-operators-opentelemetry-operator-79d5698d47-p2npk     2/2     Running   0          4d
aleccham@ALECCHAM-M-6D7G cappd %
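If any pod sits in Pending or CrashLoopBackOff instead, describing it and checking its logs is usually the fastest way to find the cause (substitute a pod name from your own output):
kubectl describe pod <pod-name> -n appdynamics
kubectl logs <pod-name> -n appdynamics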

Here, we’ll focus on the Helm chart for Kubernetes and App Service Monitoring, specifically highlighting its ability to install the OpenTelemetry Operator for Kubernetes. This operator supports auto-instrumentation, allowing you to effortlessly add language-specific OpenTelemetry agents to Kubernetes deployments without any code changes or adjustments to deployment specifications. Moreover, we’ll cover how to apply auto-instrumentation to specific namespaces and deployments within those namespaces through the use of required annotations.
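For reference, the annotation that drives the injection looks like the following when set on a Deployment's pod template (an illustrative snippet; later in this tutorial we apply the equivalent annotation at the namespace level instead):
# Illustrative pod-template annotation for Java auto-instrumentation
template:
  metadata:
    annotations:
      instrumentation.opentelemetry.io/inject-java: "true"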

  1. Copy the following YAML, which defines the CNAO OpenTelemetry Instrumentation resource, and save it as a file (I named mine auto-instru.yaml) so that we can deploy it into the namespace that our application is going to live in.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: auto-instrumentation
spec:
  exporter:
    endpoint: http://appdynamics-otel-collector-service.appdynamics.svc.cluster.local:4317
  propagators:
    - tracecontext
    - baggage
    - b3
  env:
    - name: OTEL_EXPORTER_OTLP_INSECURE
      value: "true"
    - name: OTEL_LOG_LEVEL
      value: "debug"
    - name: OTEL_TRACES_EXPORTER
      value: "otlp,logging"
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://appdynamics-otel-collector-service.appdynamics.svc.cluster.local:4317"
      - name: OTEL_JAVAAGENT_DEBUG
        value: "true"
  nodejs:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:latest
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://appdynamics-otel-collector-service.appdynamics.svc.cluster.local:4317"
  python:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.28b1
    env:
      - name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
        value: "http://appdynamics-otel-collector-service.appdynamics.svc.cluster.local:4318/v1/traces"
      - name: OTEL_TRACES_EXPORTER
        value: otlp_proto_http
      - name: OTEL_PYTHON_LOG_LEVEL
        value: "debug"
  dotnet:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:latest
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://appdynamics-otel-collector-service.appdynamics.svc.cluster.local:4317"
  1. Next, let’s deploy the Instrumentation resource, which needs to live in the same namespace as our application.
kubectl apply -f auto-instru.yaml -n teastore
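A quick check confirms the Instrumentation object was created in the namespace:
kubectl get instrumentation -n teastore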
  1. By leveraging its cluster monitoring agent, CNAO offers an automatic installation feature for its APM agents within your enterprise applications. This capability helps ensure seamless integration and monitoring. If your application is developed in any of the languages configured in the Instrumentation resource above (Java, Node.js, Python, or .NET), it is supported for auto-instrumentation monitoring by CNAO.

The CNAO auto-instrumentation feature simplifies the process of enabling APM agents in applications built with these languages, allowing for efficient and comprehensive monitoring.

  1. I would like to express my gratitude to Julio Gomez (jgomez2), a Cisco employee, for developing the following application. All credit for this application goes to Julio, and I appreciate his generosity in sharing it with me for use in my demos. This application is written entirely in Java, making it an ideal candidate for monitoring through CNAO auto-instrumentation.

This application represents an e-commerce tea store that provides support for multiple users and facilitates payment submissions to a third-party credit card processing service. The communication between the microservices within the application is monitored by CNAO, enabling the creation of valuable business transaction data.

As the name implies, you can “purchase” all your black, green, white, and rooibos teas from the store. Unfortunately, we haven’t figured out our fulfillment process quite yet.

aleccham@ALECCHAM-M-6D7G teastore % cat teastore.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-db
  labels:
    app: teastore-db
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-db
      version: v1
  template:
    metadata:
      labels:
        app: teastore-db
        version: v1
    spec:
      containers:
        - name: teastore-db
          image: descartesresearch/teastore-db
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-db
  labels:
    app: teastore-db
    service: teastore-db
spec:
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: teastore-db
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-registry
  labels:
    app: teastore-registry
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-registry
      version: v1
  template:
    metadata:
      labels:
        app: teastore-registry
        version: v1
    spec:
      containers:
        - name: teastore-registry
          image: brownkw/teastore-registry
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-registry
  labels:
    app: teastore-registry
    service: teastore-registry
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-persistence
  labels:
    framework: java
    app: teastore-persistence
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teastore-persistence
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-persistence
        version: v1
    spec:
      containers:
        - name: teastore-persistence
          image: brownkw/teastore-persistence
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-persistence"
            - name: REGISTRY_HOST
              value: "teastore-registry"
            - name: DB_HOST
              value: "teastore-db"
            - name: DB_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-persistence
  labels:
    app: teastore-persistence
    service: teastore-persistence
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-persistence
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-auth
  labels:
    framework: java
    app: teastore-auth
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-auth
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-auth
        version: v1
    spec:
      containers:
        - name: teastore-auth
          image: brownkw/teastore-auth
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-auth"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-auth
  labels:
    app: teastore-auth
    service: teastore-auth
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-auth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-webui-v1
  labels:
    framework: java
    app: teastore-webui
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teastore-webui
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-webui
        version: v1
    spec:
      containers:
        - name: teastore-webui-v1
          image: brownkw/teastore-webui
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 8080
          env:
            - name: HOST_NAME
              value: "teastore-webui"
            - name: REGISTRY_HOST
              value: "teastore-registry"
            - name: PROCESS_PAYMENT
              value: "true"
            - name: VISA_URL
              value: "https://fso-payment-gw-sim.azurewebsites.net/api/payment"
            - name: MASTERCARD_URL
              value: "https://fso-payment-gw-sim.azurewebsites.net/api/payment"
            - name: AMEX_URL
              value: "https://amex-fso-payment-gw-sim.azurewebsites.net/api/payment"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-webui
  labels:
    app: teastore-webui
    service: teastore-webui
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: teastore-webui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-recommender
  labels:
    framework: java
    app: teastore-recommender
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-recommender
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-recommender
        version: v1
    spec:
      containers:
        - name: teastore-recommender
          image: brownkw/teastore-recommender
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-recommender"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-recommender
  labels:
    app: teastore-recommender
    service: teastore-recommender
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-recommender
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-image-v1
  labels:
    framework: java
    app: teastore-image
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-image
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-image
        version: v1
    spec:
      containers:
        - name: teastore-image-v1
          image: brownkw/teastore-image
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-image"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-image
  labels:
    app: teastore-image
    service: teastore-image
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-image
  1. To deploy the application into a designated namespace named teastore, you must first create the namespace using the provided command. Once the namespace is created, you can proceed with deploying the application.
aleccham@ALECCHAM-M-6D7G teastore % kubectl create ns teastore
namespace/teastore created
aleccham@ALECCHAM-M-6D7G teastore %
  1. To ensure that the OpenTelemetry Operator injects the appropriate APM agent, the Kubernetes namespace must be annotated correctly. The annotation tells the operator which pods to instrument and which language agent to use. Annotate the namespace with the following command:
aleccham@ALECCHAM-M-6D7G cappd % kubectl annotate ns teastore instrumentation.opentelemetry.io/inject-java="true" --all
namespace/teastore annotated
aleccham@ALECCHAM-M-6D7G cappd %
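You can verify that the annotation took effect by inspecting the namespace metadata:
kubectl get ns teastore -o jsonpath='{.metadata.annotations}'
With the namespace created and annotated, deploy the application into it: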
aleccham@ALECCHAM-M-6D7G teastore % kubectl create -f teastore.yml -n teastore
deployment.apps/teastore-db created
service/teastore-db created
deployment.apps/teastore-registry created
service/teastore-registry created
deployment.apps/teastore-persistence created
service/teastore-persistence created
deployment.apps/teastore-auth created
service/teastore-auth created
deployment.apps/teastore-webui-v1 created
service/teastore-webui created
deployment.apps/teastore-recommender created
service/teastore-recommender created
deployment.apps/teastore-image-v1 created
service/teastore-image created
aleccham@ALECCHAM-M-6D7G teastore %
  1. We can validate that the application launched by checking that the pods are running.
aleccham@ALECCHAM-M-6D7G teastore % kubectl get pods -n teastore
NAME                                    READY   STATUS    RESTARTS   AGE
teastore-auth-5945b564ff-pncxg          1/1     Running   0          32m
teastore-db-b65d686cb-rkxbq             1/1     Running   0          32m
teastore-image-v1-6f474cbb5f-cdbrg      1/1     Running   0          32m
teastore-persistence-645dd7d8b8-bmq5f   1/1     Running   0          32m
teastore-recommender-7d5d79bb7f-dcbsr   1/1     Running   0          32m
teastore-registry-8574746b8-srvz4       1/1     Running   0          32m
teastore-webui-v1-85b9595876-xwlvz      1/1     Running   0          32m
aleccham@ALECCHAM-M-6D7G teastore %
  1. Be sure to note the URL of the external AWS LoadBalancer service so that we can navigate to the website and generate some application data later in the tutorial. It is the only service with an EXTERNAL-IP entry. You will need to append port 8080 to the end of the URL to access the WebUI.
aleccham@ALECCHAM-M-6D7G teastore % kubectl get svc -n teastore
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
teastore-auth          ClusterIP      10.100.111.64    <none>                                                                    8080/TCP         63m
teastore-db            ClusterIP      10.100.199.51    <none>                                                                    3306/TCP         63m
teastore-image         ClusterIP      10.100.253.18    <none>                                                                    8080/TCP         63m
teastore-persistence   ClusterIP      10.100.84.228    <none>                                                                    8080/TCP         63m
teastore-recommender   ClusterIP      10.100.67.126    <none>                                                                    8080/TCP         63m
teastore-registry      ClusterIP      10.100.85.212    <none>                                                                    8080/TCP         63m
teastore-webui         LoadBalancer   10.100.218.163   ad31ad3bc12554a4c892f71095ee7f97-1457516564.us-east-1.elb.amazonaws.com   8080:30528/TCP   63m
aleccham@ALECCHAM-M-6D7G teastore %
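Before opening a browser, a quick reachability check from your workstation confirms the LoadBalancer is serving (replace the hostname with the EXTERNAL-IP from your own output; DNS for a newly created ELB can take a few minutes to propagate):
curl -I http://<EXTERNAL-IP>:8080/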
  1. Remember, in a previous step we noted the EXTERNAL-IP for the tea store application. Let’s navigate to that URL and generate some data so that business transaction data appears in the CNAO GUI.
aleccham@ALECCHAM-M-6D7G cappd % kubectl get svc -n teastore
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
teastore-auth          ClusterIP      10.100.163.55    <none>                                                                    8080/TCP         4d
teastore-db            ClusterIP      10.100.179.75    <none>                                                                    3306/TCP         4d
teastore-image         ClusterIP      10.100.145.219   <none>                                                                    8080/TCP         4d
teastore-persistence   ClusterIP      10.100.251.167   <none>                                                                    8080/TCP         4d
teastore-recommender   ClusterIP      10.100.159.157   <none>                                                                    8080/TCP         4d
teastore-registry      ClusterIP      10.100.238.202   <none>                                                                    8080/TCP         4d
teastore-webui         LoadBalancer   10.100.186.69    a6de0ed56211c42d7b599b4989bff86c-2126296795.us-east-1.elb.amazonaws.com   8080:32294/TCP   4d
aleccham@ALECCHAM-M-6D7G cappd %
  1. To generate data, simply make purchases within the store; there is a saved user with payment information. I will not walk through how to make purchases in the web store, as I hope that, as engineers, you can figure that one out on your own. CNAO will not receive any data unless you interact with the web app, and it expects a decent load on the application before it reports data, which means you will need to keep making purchases for data to appear in the CNAO GUI. Plan on a large number of purchases; 20 is a good number. If you would like to script a baseline of traffic alongside your purchases, see the request loop below the screenshot.

WebUI of Teastore
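If you would like background traffic flowing while you complete purchases, a simple loop works (a sketch; replace <EXTERNAL-IP> with your LoadBalancer hostname, and note that browsing alone does not create purchase transactions):
# Send one request to the storefront every two seconds; press Ctrl+C to stop.
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://<EXTERNAL-IP>:8080/"
  sleep 2
done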

  1. Another method to validate that CNAO auto-instrumentation has been configured is to describe the tea store pod and look for OpenTelemetry information in the output. You can see that the Environment section contains information pertaining to our CNAO configuration.
aleccham@ALECCHAM-M-6D7G cappd % kubectl describe pod teastore-auth-646776f7bd-b4zhm -n teastore
Name:             teastore-auth-646776f7bd-b4zhm
Namespace:        teastore
Priority:         0
Service Account:  default
Node:             ip-192-168-0-101.ec2.internal/192.168.0.101
Start Time:       Thu, 06 Jul 2023 11:11:58 -0400
Labels:           app=teastore-auth
                  framework=java
                  pod-template-hash=646776f7bd
                  version=v1
Annotations:      <none>
Status:           Running
IP:               192.168.31.172
IPs:
  IP:           192.168.31.172
Controlled By:  ReplicaSet/teastore-auth-646776f7bd
Init Containers:
  opentelemetry-auto-instrumentation:
    Container ID:  containerd://9c9ce9e874b9f1ab7c6f93a6d479cd232c15350cf7f0c94133892f8f384dd686
    Image:         ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
    Image ID:      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java@sha256:fb4d8cf6f984ed80ccc3865ceb65e94c4c565003b550d08010e13d8fe1e82c3e
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      /javaagent.jar
      /otel-auto-instrumentation/javaagent.jar
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 06 Jul 2023 11:12:00 -0400
      Finished:     Thu, 06 Jul 2023 11:12:00 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /otel-auto-instrumentation from opentelemetry-auto-instrumentation (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxlrf (ro)
Containers:
  teastore-auth:
    Container ID:   containerd://60647485e16ad084c211b5f6ff3bf6fe0c04a6dc6392e198d2b14945632302e9
    Image:          brownkw/teastore-auth
    Image ID:       docker.io/brownkw/teastore-auth@sha256:0fc5c6c7154946ad943aaa7d4a805f2622783e03c8b2340ae39c4cfe90fc79bd
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 06 Jul 2023 11:12:08 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      HOST_NAME:                           teastore-auth
      REGISTRY_HOST:                       teastore-registry
      OTEL_EXPORTER_OTLP_ENDPOINT:         http://appdynamics-otel-collector-service.appdynamics.svc.cluster.local:4317
      OTEL_JAVAAGENT_DEBUG:                true
      JAVA_TOOL_OPTIONS:                    -javaagent:/otel-auto-instrumentation/javaagent.jar
      OTEL_EXPORTER_OTLP_INSECURE:         true
      OTEL_LOG_LEVEL:                      debug
      OTEL_TRACES_EXPORTER:                otlp,logging
      OTEL_SERVICE_NAME:                   teastore-auth
      OTEL_RESOURCE_ATTRIBUTES_POD_NAME:   teastore-auth-646776f7bd-b4zhm (v1:metadata.name)
      OTEL_RESOURCE_ATTRIBUTES_NODE_NAME:   (v1:spec.nodeName)
      OTEL_PROPAGATORS:                    tracecontext,baggage,b3
      OTEL_RESOURCE_ATTRIBUTES:            k8s.container.name=teastore-auth,k8s.deployment.name=teastore-auth,k8s.namespace.name=teastore,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=teastore-auth-646776f7bd
    Mounts:
      /otel-auto-instrumentation from opentelemetry-auto-instrumentation (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxlrf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-gxlrf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  opentelemetry-auto-instrumentation:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   <unset>
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
aleccham@ALECCHAM-M-6D7G cappd %
  1. After spending some time generating application load, we should see data for the tea store application and the service requests between its microservices in the CNAO GUI. Double-click a specific CNAO service to navigate into its application data.

Main CNAO Dashboard

Double Clicking on the Services Box

Diving into the WebUI Service Performance

Congratulations! You’ve completed our tutorial on CNAO APM, infrastructure, and machine auto-instrumentation. We hope that this tutorial has provided you with a solid understanding of these tools and how they work together to optimize application performance. Remember that this is just the beginning of your journey, and there’s always more to learn. Keep exploring and experimenting with these tools, and don’t hesitate to reach out to us with any questions or feedback. We wish you all the best in your continued learning and growth!

Learn More