Welcome to this tutorial on AppDynamics APM, Network, and Machine auto-instrumentation. Throughout this tutorial, we’ll show you how these tools work together to provide end-to-end visibility and help ensure optimal application performance. You’ll learn about the features and benefits of AppDynamics APM, how AppDynamics Network Visibility can help identify network performance issues, and how the AppDynamics Machine Agent automatically monitors infrastructure. With hands-on activities and interactive exercises, you’ll gain a comprehensive understanding of how these tools combine to deliver an optimal user experience. Join us on this learning journey and become an expert in AppDynamics APM, Network, and Machine auto-instrumentation!

What You’ll Learn

What You’ll Need

Some Tips to Remember

  1. First, we need to install eksctl and kubectl. These are the two programs we will use to configure and interact with our Kubernetes clusters. AWS provides a well-written installation guide in its EKS documentation. I will highlight the steps I took to get the two programs installed on my macOS device.
  1. Using the provided commands, I ran the following to download the kubectl binary and make it available in my PATH.
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.26.2/2023-03-17/bin/darwin/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
aleccham@ALECCHAM-M-6D7G ~ % kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:36:36Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.12-eks-ec5523e", GitCommit:"3939bb9475d7f05c8b7b058eadbe679e6c9b5e2e", GitTreeState:"clean", BuildDate:"2023-03-20T21:30:46Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
aleccham@ALECCHAM-M-6D7G ~ %
  1. Now we can install eksctl. Following the guide on eksctl.io, we can quickly install the tool on our macOS system. I use Homebrew as my package manager on macOS; feel free to use the package manager for your system.
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
  1. We can validate that the installation proceeded as planned.
aleccham@ALECCHAM-M-6D7G ~ % eksctl version
0.131.0-dev+d4917e5d1.2023-02-23T12:47:39Z
aleccham@ALECCHAM-M-6D7G ~ %
  1. Deploying an EKS Kubernetes cluster can be done very quickly with one command. I have made some modifications to the defaults. For example, I am naming the cluster cvf-k8, deploying it in us-east-1, using m5.large instances, requesting 5 nodes (minimum 5, maximum 6), enabling SSH access with my public key, and using a managed node group:
eksctl create cluster --name cvf-k8 --region us-east-1 --node-type m5.large --nodes 5 --nodes-min 5 --nodes-max 6 --ssh-access --ssh-public-key=aleccham --managed
  1. Running this command will take some time, but at its completion you will have a fully working Kubernetes cluster that you can use to deploy the application we will be monitoring with AppDynamics. The output below shows a successfully deployed EKS Kubernetes cluster.
aleccham@ALECCHAM-M-6D7G cluster-agent-rhel-bundled-distribution % eksctl create cluster --name cvf-k82 --region us-east-1 --node-type m5.large --nodes 4 --nodes-min 4 --nodes-max 5 --ssh-access --ssh-public-key=aleccham --managed
2023-04-24 13:08:02 [ℹ]  eksctl version 0.131.0-dev+d4917e5d1.2023-02-23T12:47:39Z
2023-04-24 13:08:02 [ℹ]  using region us-east-1
2023-04-24 13:08:03 [ℹ]  skipping us-east-1e from selection because it doesn't support the following instance type(s): m5.large
2023-04-24 13:08:03 [ℹ]  setting availability zones to [us-east-1a us-east-1d]
2023-04-24 13:08:03 [ℹ]  subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2023-04-24 13:08:03 [ℹ]  subnets for us-east-1d - public:192.168.32.0/19 private:192.168.96.0/19
2023-04-24 13:08:03 [ℹ]  nodegroup "ng-50be5651" will use "" [AmazonLinux2/1.24]
2023-04-24 13:08:03 [ℹ]  using EC2 key pair %!q(*string=<nil>)
2023-04-24 13:08:03 [ℹ]  using Kubernetes version 1.24
2023-04-24 13:08:03 [ℹ]  creating EKS cluster "cvf-k82" in "us-east-1" region with managed nodes
2023-04-24 13:08:03 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2023-04-24 13:08:03 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=cvf-k82'
2023-04-24 13:08:03 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "cvf-k82" in "us-east-1"
2023-04-24 13:08:03 [ℹ]  CloudWatch logging will not be enabled for cluster "cvf-k82" in "us-east-1"
2023-04-24 13:08:03 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=cvf-k82'
2023-04-24 13:08:03 [ℹ]
2 sequential tasks: { create cluster control plane "cvf-k82",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-50be5651",
    }
}
2023-04-24 13:08:03 [ℹ]  building cluster stack "eksctl-cvf-k82-cluster"
2023-04-24 13:08:04 [ℹ]  deploying stack "eksctl-cvf-k82-cluster"
2023-04-24 13:08:34 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:09:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:10:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:11:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:12:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:13:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:14:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:15:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:16:04 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:17:05 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:18:05 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-cluster"
2023-04-24 13:20:06 [ℹ]  building managed nodegroup stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:20:08 [ℹ]  deploying stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:20:08 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:20:38 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:21:20 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:22:09 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:23:14 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:24:56 [ℹ]  waiting for CloudFormation stack "eksctl-cvf-k82-nodegroup-ng-50be5651"
2023-04-24 13:24:56 [ℹ]  waiting for the control plane to become ready
2023-04-24 13:24:56 [✔]  saved kubeconfig as "/Users/aleccham/.kube/config"
2023-04-24 13:24:56 [ℹ]  no tasks
2023-04-24 13:24:56 [✔]  all EKS cluster resources for "cvf-k82" have been created
2023-04-24 13:24:56 [ℹ]  nodegroup "ng-50be5651" has 4 node(s)
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-13-41.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-22-83.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-37-7.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-54-65.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  waiting for at least 4 node(s) to become ready in "ng-50be5651"
2023-04-24 13:24:56 [ℹ]  nodegroup "ng-50be5651" has 4 node(s)
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-13-41.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-22-83.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-37-7.ec2.internal" is ready
2023-04-24 13:24:56 [ℹ]  node "ip-192-168-54-65.ec2.internal" is ready
2023-04-24 13:24:57 [ℹ]  kubectl command should work with "/Users/aleccham/.kube/config", try 'kubectl get nodes'
2023-04-24 13:24:57 [✔]  EKS cluster "cvf-k82" in "us-east-1" region is ready
aleccham@ALECCHAM-M-6D7G cluster-agent-rhel-bundled-distribution %
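If you prefer a declarative workflow, the same flags can be captured in an eksctl config file and applied with "eksctl create cluster -f cluster.yaml". The sketch below is my own rendering of the command-line options used above; the nodegroup name is my choice, and you should verify the schema against the eksctl documentation for your version.

```yaml
# cluster.yaml: equivalent of the eksctl flags used above (sketch)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cvf-k8
  region: us-east-1
managedNodeGroups:
  - name: ng-1            # nodegroup name is my own choice
    instanceType: m5.large
    desiredCapacity: 5
    minSize: 5
    maxSize: 6
    ssh:
      allow: true
      publicKeyName: aleccham
```

A config file makes the cluster definition easy to version-control and reuse, which is handy if you tear the lab cluster down between sessions.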
  1. Validate that you have a working Kubernetes cluster by checking the node status of the EKS cluster. You want to see that all of your nodes have reached the Ready status.
aleccham@ALECCHAM-M-6D7G cluster-agent-rhel-bundled-distribution % kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-13-41.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-22-83.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-37-7.ec2.internal    Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-54-65.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0
ip-192-168-36-69.ec2.internal   Ready    <none>   12m   v1.24.11-eks-a59e1f0

aleccham@ALECCHAM-M-6D7G cluster-agent-rhel-bundled-distribution %
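Eyeballing the STATUS column works, but the check can also be scripted. The helper below is my own small sketch, not part of the AWS guide: it reads the output of kubectl get nodes --no-headers and counts nodes whose status is anything other than Ready.

```shell
# Count nodes whose STATUS column (field 2) is not "Ready".
# Feed it the output of: kubectl get nodes --no-headers
count_not_ready() {
  awk '$2 != "Ready" { n++ } END { print n+0 }'
}

# Usage against a live cluster (you want this to print 0):
#   kubectl get nodes --no-headers | count_not_ready
```

This is useful in a loop while waiting for a freshly created nodegroup to settle.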
  1. AppDynamics can automatically install its APM agents into your enterprise’s applications using its cluster monitoring agent. As long as your application is written in one of the following languages, it should be supported for auto-instrumentation monitoring.

    • Java
    • .NET
    • Node.js
    • PHP
    • Python
    • Go
  2. The following application was written by a Cisco employee, Julio Gomez (jgomez2). All credit for the application goes to him, and I want to thank him for sharing it and allowing me to use it in my demos. The application is written entirely in Java, making it a perfect candidate for AppDynamics auto-instrumentation monitoring. The application is an e-commerce tea store that supports multiple users and submits payments to a third-party credit card processing service. The communication between the microservices is then monitored by AppD to create the business transaction data.

    As the name implies, you can “purchase” all your black, green, white, and rooibos teas from the store. Unfortunately, we haven’t figured out our fulfillment process quite yet.

aleccham@ALECCHAM-M-6D7G teastore % cat teastore.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-db
  labels:
    app: teastore-db
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-db
      version: v1
  template:
    metadata:
      labels:
        app: teastore-db
        version: v1
    spec:
      containers:
        - name: teastore-db
          image: descartesresearch/teastore-db
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-db
  labels:
    app: teastore-db
    service: teastore-db
spec:
  ports:
    - port: 3306
      protocol: TCP
  selector:
    app: teastore-db
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-registry
  labels:
    app: teastore-registry
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-registry
      version: v1
  template:
    metadata:
      labels:
        app: teastore-registry
        version: v1
    spec:
      containers:
        - name: teastore-registry
          image: brownkw/teastore-registry
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-registry
  labels:
    app: teastore-registry
    service: teastore-registry
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-persistence
  labels:
    framework: java
    app: teastore-persistence
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teastore-persistence
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-persistence
        version: v1
    spec:
      containers:
        - name: teastore-persistence
          image: brownkw/teastore-persistence
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-persistence"
            - name: REGISTRY_HOST
              value: "teastore-registry"
            - name: DB_HOST
              value: "teastore-db"
            - name: DB_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-persistence
  labels:
    app: teastore-persistence
    service: teastore-persistence
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-persistence
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-auth
  labels:
    framework: java
    app: teastore-auth
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-auth
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-auth
        version: v1
    spec:
      containers:
        - name: teastore-auth
          image: brownkw/teastore-auth
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-auth"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-auth
  labels:
    app: teastore-auth
    service: teastore-auth
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-auth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-webui-v1
  labels:
    framework: java
    app: teastore-webui
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teastore-webui
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-webui
        version: v1
    spec:
      containers:
        - name: teastore-webui-v1
          image: brownkw/teastore-webui
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          ports:
            - containerPort: 8080
          env:
            - name: HOST_NAME
              value: "teastore-webui"
            - name: REGISTRY_HOST
              value: "teastore-registry"
            - name: PROCESS_PAYMENT
              value: "true"
            - name: VISA_URL
              value: "https://fso-payment-gw-sim.azurewebsites.net/api/payment"
            - name: MASTERCARD_URL
              value: "https://fso-payment-gw-sim.azurewebsites.net/api/payment"
            - name: AMEX_URL
              value: "https://amex-fso-payment-gw-sim.azurewebsites.net/api/payment"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-webui
  labels:
    app: teastore-webui
    service: teastore-webui
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: teastore-webui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-recommender
  labels:
    framework: java
    app: teastore-recommender
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-recommender
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-recommender
        version: v1
    spec:
      containers:
        - name: teastore-recommender
          image: brownkw/teastore-recommender
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-recommender"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-recommender
  labels:
    app: teastore-recommender
    service: teastore-recommender
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-recommender
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teastore-image-v1
  labels:
    framework: java
    app: teastore-image
    version: v1
spec:
  selector:
    matchLabels:
      app: teastore-image
      version: v1
  template:
    metadata:
      labels:
        framework: java
        app: teastore-image
        version: v1
    spec:
      containers:
        - name: teastore-image-v1
          image: brownkw/teastore-image
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 2Gi
              cpu: 1
          env:
            - name: HOST_NAME
              value: "teastore-image"
            - name: REGISTRY_HOST
              value: "teastore-registry"
---
apiVersion: v1
kind: Service
metadata:
  name: teastore-image
  labels:
    app: teastore-image
    service: teastore-image
spec:
  ports:
    - port: 8080
      protocol: TCP
  selector:
    app: teastore-image
  1. Deploy the application into a namespace called teastore. Create the namespace with the following command, and then deploy the application.
aleccham@ALECCHAM-M-6D7G teastore % kubectl create ns teastore
namespace/teastore created
aleccham@ALECCHAM-M-6D7G teastore %
aleccham@ALECCHAM-M-6D7G teastore % kubectl create -f teastore.yml -n teastore
deployment.apps/teastore-db created
service/teastore-db created
deployment.apps/teastore-registry created
service/teastore-registry created
deployment.apps/teastore-persistence created
service/teastore-persistence created
deployment.apps/teastore-auth created
service/teastore-auth created
deployment.apps/teastore-webui-v1 created
service/teastore-webui created
deployment.apps/teastore-recommender created
service/teastore-recommender created
deployment.apps/teastore-image-v1 created
service/teastore-image created
aleccham@ALECCHAM-M-6D7G teastore %
  1. We can validate that the application has launched by checking that its pods are running.
aleccham@ALECCHAM-M-6D7G teastore % kubectl get pods -n teastore
NAME                                    READY   STATUS    RESTARTS   AGE
teastore-auth-5945b564ff-pncxg          1/1     Running   0          32m
teastore-db-b65d686cb-rkxbq             1/1     Running   0          32m
teastore-image-v1-6f474cbb5f-cdbrg      1/1     Running   0          32m
teastore-persistence-645dd7d8b8-bmq5f   1/1     Running   0          32m
teastore-recommender-7d5d79bb7f-dcbsr   1/1     Running   0          32m
teastore-registry-8574746b8-srvz4       1/1     Running   0          32m
teastore-webui-v1-85b9595876-xwlvz      1/1     Running   0          32m
aleccham@ALECCHAM-M-6D7G teastore %
  1. Be sure to note the URL of the external AWS load balancer service so that we can navigate to the website and generate some application data later in the tutorial. It is the only service with an EXTERNAL-IP value.
aleccham@ALECCHAM-M-6D7G teastore % kubectl get svc -n teastore
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
teastore-auth          ClusterIP      10.100.111.64    <none>                                                                    8080/TCP         63m
teastore-db            ClusterIP      10.100.199.51    <none>                                                                    3306/TCP         63m
teastore-image         ClusterIP      10.100.253.18    <none>                                                                    8080/TCP         63m
teastore-persistence   ClusterIP      10.100.84.228    <none>                                                                    8080/TCP         63m
teastore-recommender   ClusterIP      10.100.67.126    <none>                                                                    8080/TCP         63m
teastore-registry      ClusterIP      10.100.85.212    <none>                                                                    8080/TCP         63m
teastore-webui         LoadBalancer   10.100.218.163   ad31ad3bc12554a4c892f71095ee7f97-1457516564.us-east-1.elb.amazonaws.com   8080:30528/TCP   63m
aleccham@ALECCHAM-M-6D7G teastore %
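Rather than copying the hostname out of the table by hand, you can extract it directly with jsonpath, for example: kubectl get svc teastore-webui -n teastore -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'. As a pipe-friendly alternative, here is a small helper of my own that picks the EXTERNAL-IP column from the LoadBalancer row of kubectl get svc output:

```shell
# Print the EXTERNAL-IP (field 4) of the first LoadBalancer-type
# service from "kubectl get svc" output piped on stdin.
lb_hostname() {
  awk '$2 == "LoadBalancer" { print $4; exit }'
}

# Usage:
#   kubectl get svc -n teastore | lb_hostname
```

Capturing the hostname in a variable now saves retyping it when we generate load later.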

The following section was taken from the AppDynamics Installation Guide and modified for my environment.

  1. Install Helm.
brew install helm
  1. Delete any custom resource definitions (CRDs) left over from previous AppDynamics installations.
kubectl get crds
kubectl delete crds <crd-names>
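If your cluster has other CRDs you want to keep, you can filter for AppDynamics-related names before deleting anything. This is my own sketch; the "appdynamics" name filter is an assumption, so review what kubectl get crds actually lists in your cluster first.

```shell
# Select CRD names matching "appdynamics" (case-insensitive) from
# "kubectl get crds -o name" output piped on stdin.
appd_crds() {
  grep -i 'appdynamics'
}

# Usage (review the list before deleting!):
#   kubectl get crds -o name | appd_crds
#   kubectl get crds -o name | appd_crds | xargs -r kubectl delete
```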
  1. Add the Helm chart from the Cisco DevNet Repo.
helm repo add appdynamics-charts https://ciscodevnet.github.io/appdynamics-charts
  1. Create your AppDynamics namespace for the agents to run.
kubectl create ns appdynamics
  1. View the available values and decide whether you want to add any more configuration. This command simply shows what else can be configured via the Helm chart.
helm show values appdynamics-charts/cluster-agent
  1. Create your AppDynamics values.yaml. You need to edit this for your environment, specifically the controllerInfo section. These are the values I used for my deployment.
aleccham@ALECCHAM-M-6D7G teastore % cat values.yaml
# To install Cluster Agent
installClusterAgent: true
installInfraViz: true

# AppDynamics controller info
controllerInfo:
  url: https://ciscocvd.saas.appdynamics.com:443
  account: account
  username: aleccham@cisco.com
  password: password
  accessKey: accesskey

# Cluster agent config
clusterAgent:
  nsToMonitor:
    - teastore
instrumentationConfig:
  enabled: true
  instrumentationMethod: Env
  nsToInstrumentRegex: teastore
  defaultAppName: teastore
  instrumentationRules:
    - namespaceRegex: teastore
      language: java
      labelMatch:
        - framework: java
      imageInfo:
          image: "docker.io/appdynamics/java-agent:latest"
          agentMountPath: /opt/appdynamics
          imagePullPolicy: Always

# InfraViz config
infraViz:
  nodeOS: "linux"
  enableMasters: true
  stdoutLogging: true
  enableContainerHostId: true
  enableServerViz: true

install:
  metrics-server: true

# Netviz config
netViz:
  enabled: true
  netVizPort: 3892
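Before deploying, it is worth a quick sanity check that the controllerInfo fields you must customize are actually present in your values file. The helper below is my own sketch; it only greps for the key names and does not validate the values against the chart schema.

```shell
# Check that the values file piped on stdin contains each controllerInfo
# key the chart needs. Prints the names of any missing keys.
check_controller_keys() {
  input=$(cat)
  missing=""
  for key in url account username password accessKey; do
    printf '%s\n' "$input" | grep -q "^[[:space:]]*$key:" || missing="$missing $key"
  done
  printf '%s\n' "$missing"
}

# Usage (empty output means all keys were found):
#   check_controller_keys < values.yaml
```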
  1. Finally, we can deploy the Helm chart using the following command.
helm install -f ./values.yaml appdynamics appdynamics-charts/cluster-agent --namespace=appdynamics
aleccham@ALECCHAM-M-6D7G teastore % helm install -f values.yaml appdynamics appdynamics-charts/cluster-agent --namespace=appdynamics

NAME: appdynamics
LAST DEPLOYED: Mon Apr 24 16:48:10 2023
NAMESPACE: appdynamics
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Check HELM release status:

  $ helm status appdynamics -n appdynamics
  $ helm get appdynamics -n appdynamics

List operator, cluster agent and machine agent pods:

  $ kubectl get pods -n appdynamics

Release state:
  Install ClusterAgent: true
  Install InfraViz: true
  Controller URL: https://ciscocvd.saas.appdynamics.com:443
  Auto-Instrumentation enabled: true
  Installing metrics-server: true
  1. We can validate that the Helm chart deployed successfully by checking that pods are being created in our appdynamics namespace. Again, we are looking for these pods to show a STATUS of Running.
aleccham@ALECCHAM-M-6D7G ~ % kubectl get pods -n appdynamics
NAME                                                              READY   STATUS              RESTARTS   AGE
appdynamics-appdynamics-cluster-agent-appdynamics-57494485kk2tn   1/1     Running             0          4m21s
appdynamics-appdynamics-infraviz-8x4pz                            2/2     Running             0          4m21s
appdynamics-appdynamics-infraviz-kkrq6                            2/2     Running             0          4m21s
appdynamics-appdynamics-infraviz-rnmpp                            2/2     Running             0          4m21s
appdynamics-appdynamics-infraviz-s9699                            2/2     Running             0          4m21s
appdynamics-metrics-server-554f67f9c4-xdkvd                       1/1     Running             0          4m26s
appdynamics-operator-79556cbc75-lrlvg                             1/1     Running             0          4m26s
aleccham@ALECCHAM-M-6D7G ~ %
  1. Remember, in a previous step we gathered the EXTERNAL-IP for the tea store application. Let’s navigate to that URL and generate some data so that business transaction data will appear in the AppD GUI.
aleccham@ALECCHAM-M-6D7G teastore % kubectl get svc -n teastore
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
teastore-auth          ClusterIP      10.100.111.64    <none>                                                                    8080/TCP         63m
teastore-db            ClusterIP      10.100.199.51    <none>                                                                    3306/TCP         63m
teastore-image         ClusterIP      10.100.253.18    <none>                                                                    8080/TCP         63m
teastore-persistence   ClusterIP      10.100.84.228    <none>                                                                    8080/TCP         63m
teastore-recommender   ClusterIP      10.100.67.126    <none>                                                                    8080/TCP         63m
teastore-registry      ClusterIP      10.100.85.212    <none>                                                                    8080/TCP         63m
teastore-webui         LoadBalancer   10.100.218.163   ad31ad3bc12554a4c892f71095ee7f97-1457516564.us-east-1.elb.amazonaws.com   8080:30528/TCP   63m
aleccham@ALECCHAM-M-6D7G teastore %
  1. To generate data, simply make purchases within the store. There should be a user saved with payment information. I will not walk through how to make purchases in the web store; I trust that, as engineers, you can figure this one out on your own. AppD will not receive any data unless you interact with the web app, and it expects a decent load on the application before it reports data. This means you will need to continually make purchases in the store for data to appear in the AppD GUI; around 20 is a good number.
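If clicking through the store gets tedious, you can also drive some baseline load from the command line. A minimal sketch of my own: the URL is whatever your LoadBalancer reported, and the path is the TeaStore WebUI context root, both of which you should verify for your deployment. Note that page hits alone will not exercise the payment flow, so you still want some real purchases.

```shell
# Fire n sequential GET requests at the store's front page.
gen_load() {
  url="$1"
  n="${2:-20}"
  i=1
  while [ "$i" -le "$n" ]; do
    curl -s -o /dev/null "$url/tools.descartes.teastore.webui/"
    i=$((i + 1))
  done
}

# Usage (substitute your EXTERNAL-IP hostname):
#   gen_load "http://<EXTERNAL-IP>:8080" 20
```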

WebUI of Teastore

  1. Another way to validate that AppDynamics auto-instrumentation has been configured is to describe a tea store pod and look for AppD information in the output. You can see that the Annotations and Environment sections contain information pertaining to our AppD configuration.
aleccham@ALECCHAM-M-6D7G teastore % kubectl describe pod teastore-webui-v1-6955f4857c-lxsdj -n teastore
Name:             teastore-webui-v1-6955f4857c-lxsdj
Namespace:        teastore
Priority:         0
Service Account:  default
Node:             ip-192-168-62-157.ec2.internal/192.168.62.157
Start Time:       Tue, 02 May 2023 13:54:28 -0400
Labels:           app=teastore-webui
                  appd=teastore
                  framework=java
                  pod-template-hash=6955f4857c
                  version=v1
Annotations:      APPD_DEPLOYMENT_NAME: teastore-webui-v1
                  APPD_INSTRUMENTED_CONTAINERS: teastore-webui-v1
                  APPD_POD_INSTRUMENTATION_STATE: Successful
                  APPD_teastore-webui-v1_APPNAME: teastore
                  APPD_teastore-webui-v1_NODEID: 21760
                  APPD_teastore-webui-v1_NODENAME: teastore-webui-v1--2
                  APPD_teastore-webui-v1_TIERID: 1550
                  APPD_teastore-webui-v1_TIERNAME: teastore-webui-v1
                  container.seccomp.security.alpha.kubernetes.io/appd-agent-attach-java: unconfined
                  kubernetes.io/psp: eks.privileged
Status:           Running
IP:               192.168.50.5
IPs:
  IP:           192.168.50.5
Controlled By:  ReplicaSet/teastore-webui-v1-6955f4857c
Init Containers:
  appd-agent-attach-java:
    Container ID:  containerd://395614c8d488aedb34a31a1862600e8ced222d498078fa772c0e9fbe6f1f48be
    Image:         docker.io/appdynamics/java-agent:latest
    Image ID:      docker.io/appdynamics/java-agent@sha256:35c39e41ae56acca85f77ac10fa97908e06902503e1659bb5740ec72b276baa7
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
      -r
      /opt/appdynamics/.
      /opt/appdynamics-java
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 02 May 2023 13:54:29 -0400
      Finished:     Tue, 02 May 2023 13:54:29 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  75M
    Requests:
      cpu:        100m
      memory:     50M
    Environment:  <none>
    Mounts:
      /opt/appdynamics-java from appd-agent-repo-java (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qt6t (ro)
Containers:
  teastore-webui-v1:
    Container ID:   containerd://be5a601e304f9c98a455f12ffeea103c7113f00d4277083a9facab630f95e492
    Image:          brownkw/teastore-webui
    Image ID:       docker.io/brownkw/teastore-webui@sha256:d54e7c992bae1cc35f64a888fca0c489cea1da042e0aa333837c4d59403e76a1
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 02 May 2023 13:54:30 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2Gi
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment:
      APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY:      <set to the key 'controller-key' in secret 'cluster-agent-secret'>  Optional: false
      HOST_NAME:                                 teastore-webui
      REGISTRY_HOST:                             teastore-registry
      PROCESS_PAYMENT:                           true
      VISA_URL:                                  https://fso-payment-gw-sim.azurewebsites.net/api/payment
      MASTERCARD_URL:                            https://fso-payment-gw-sim.azurewebsites.net/api/payment
      AMEX_URL:                                  https://amex-fso-payment-gw-sim.azurewebsites.net/api/payment
      APPDYNAMICS_CONTROLLER_SSL_ENABLED:        true
      APPDYNAMICS_AGENT_ACCOUNT_NAME:            ciscocvd
      APPDYNAMICS_AGENT_APPLICATION_NAME:        teastore
      APPDYNAMICS_AGENT_TIER_NAME:               teastore-webui-v1
      APPDYNAMICS_AGENT_REUSE_NODE_NAME_PREFIX:  teastore-webui-v1
      JAVA_TOOL_OPTIONS:                          -Dappdynamics.agent.accountAccessKey=$(APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY) -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar
      APPDYNAMICS_CONTROLLER_HOST_NAME:          ciscocvd.saas.appdynamics.com
      APPDYNAMICS_CONTROLLER_PORT:               443
      APPDYNAMICS_NETVIZ_AGENT_HOST:              (v1:status.hostIP)
      APPDYNAMICS_NETVIZ_AGENT_PORT:             3892
    Mounts:
      /opt/appdynamics-java from appd-agent-repo-java (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qt6t (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  appd-agent-repo-java:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-4qt6t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  28m   default-scheduler  Successfully assigned teastore/teastore-webui-v1-6955f4857c-lxsdj to ip-192-168-62-157.ec2.internal
  Normal  Pulling    28m   kubelet            Pulling image "docker.io/appdynamics/java-agent:latest"
  Normal  Pulled     28m   kubelet            Successfully pulled image "docker.io/appdynamics/java-agent:latest" in 91.697553ms
  Normal  Created    28m   kubelet            Created container appd-agent-attach-java
  Normal  Started    28m   kubelet            Started container appd-agent-attach-java
  Normal  Pulling    28m   kubelet            Pulling image "brownkw/teastore-webui"
  Normal  Pulled     28m   kubelet            Successfully pulled image "brownkw/teastore-webui" in 134.664629ms
  Normal  Created    28m   kubelet            Created container teastore-webui-v1
  Normal  Started    28m   kubelet            Started container teastore-webui-v1
aleccham@ALECCHAM-M-6D7G teastore %
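You can reduce that wall of describe output to the one annotation that matters. The helper below is my own sketch; it pulls the APPD_POD_INSTRUMENTATION_STATE value out of kubectl describe pod output. Querying the annotation directly with jsonpath would work just as well.

```shell
# Print the value of the APPD_POD_INSTRUMENTATION_STATE annotation from
# "kubectl describe pod" output piped on stdin.
instrumentation_state() {
  awk '/APPD_POD_INSTRUMENTATION_STATE/ { print $NF; exit }'
}

# Usage (you want this to print "Successful"):
#   kubectl describe pod <pod-name> -n teastore | instrumentation_state
```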
  1. After spending some time generating application load, we should see data for the tea store application and the business transactions between its microservices in the AppD GUI.

Main AppD Dashboard

  1. To check that Network Visibility monitoring has been instrumented correctly, navigate to the Network Dashboard, located in the subtab next to the main Application Dashboard in the AppD GUI. You should see network information pertaining to the application’s microservices. If your dashboard looks similar to the screenshot below, all is well.

AppD Network Visibility Dashboard

Congratulations! You’ve completed our tutorial on AppDynamics APM, Network, and Machine auto-instrumentation. We hope that this tutorial has provided you with a solid understanding of these tools and how they work together to optimize application performance. Remember that this is just the beginning of your journey, and there’s always more to learn. Keep exploring and experimenting with these tools, and don’t hesitate to reach out to us with any questions or feedback. We wish you all the best in your continued learning and growth!

Learn More