https://github.com/CiscoLearning/panoptica-tutorial-sample-code/. This repository contains all of the Terraform required to build the AWS EC2 instance, as well as the files needed for the completion of the tutorial.
Once the code has been cloned, navigate to the panoptica-tutorial-sample-code/terraform/terraform.tfvars file and add the information gathered in the previous step (AWS IAM keys, AWS EC2 key pair name, and EC2 key pair location). Once this file reflects your information, the final piece is ensuring that the region in which you created the EC2 key pair (and which you have decided to use) matches the region declared at the top of the panoptica-main.tf file.
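As a quick sanity check before running Terraform (a minimal sketch, assuming panoptica-main.tf lives alongside terraform.tfvars in the terraform folder), you can confirm from the shell which region is declared:
# Show the region declared in the Terraform configuration
cd panoptica-tutorial-sample-code/terraform
grep -n "region" panoptica-main.tf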
Once these changes have been made, move your terminal into the panoptica-tutorial-sample-code/terraform folder and perform a terraform init followed by a terraform apply -auto-approve (we’re skipping the plan step because we know what will be created). This process should only take a couple of minutes. At the end, you should have a successfully built AWS VM, and the output of the Terraform run should indicate the FQDN of your new AWS EC2 instance. The instance is fully updated and has Docker, kubectl, kind, caddy, k9s, and terraform installed; the files from the panoptica-tutorial-sample-code/terraform folder have also been copied over in .tar.gz format and decompressed into the ~/panoptica directory of the VM.
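If you need the FQDN again later without scrolling back through the apply output, Terraform can reprint its outputs (the exact output name depends on the repository's Terraform code, so simply list them all):
cd panoptica-tutorial-sample-code/terraform
terraform output   # lists every output value, including the instance FQDN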
To access the VM, you’ll need to use the following command syntax: ssh -i "location of key pair file downloaded" ubuntu@fqdn.output.from.terraform, which in my case would look something like ssh -i "~/.ssh/qms-us-east-1-key.pem" ubuntu@ec2-54-227-21-142.compute-1.amazonaws.com. The FQDN is written to the output of the terraform apply command (as seen at the end of the GIF below) and is also saved as a file within the terraform directory. Using the SSH command above should connect you directly to your AWS instance.
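If SSH rejects the connection because the private key file is too permissive, tighten its permissions first; the key path below is just the example used above:
chmod 400 ~/.ssh/qms-us-east-1-key.pem   # private keys must not be group/world-readable
ssh -i ~/.ssh/qms-us-east-1-key.pem ubuntu@ec2-54-227-21-142.compute-1.amazonaws.com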
The goal of these first tasks is to deploy a Kubernetes demo cluster based on Kind (Kubernetes in Docker) that will be protected with Panoptica. The cluster includes one control-plane node and two worker nodes. All files required for the rest of this tutorial are located under the ~/panoptica directory on the EC2 VM.
To instantiate the kind cluster, we will reference two files. The first is the ~/panoptica/lab/kind-config.yaml file, which defines the number of nodes (both control-plane and worker) for the cluster, as well as any special configuration required (for our usage, we’ll simply disable the default kindnet overlay networking stack in favor of Calico).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
networking:
  disableDefaultCNI: true # disable the default Kindnet CNI
  podSubnet: 192.168.0.0/16 # set to Calico's default subnet
To deploy the kind cluster, there is a bash script located in ~/panoptica/cluster called cluster-build.sh, which we will use to run kind. The command executed by the script is given below:
kind create cluster --name demo --config kind-config.yaml --image="kindest/node:v1.23.10@sha256:f047448af6a656fae7bc909e2fab360c18c487ef3edc93f06d78cdfd864b2d12"
We can run this script by typing bash ~/panoptica/cluster/cluster-build.sh at the terminal and pressing Enter.
You should see output similar to:
Creating cluster "demo" ...
✓ Ensuring node image (kindest/node:v1.23.10) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-demo"
You can now use your cluster with:
kubectl cluster-info --context kind-demo
Thanks for using kind! 😊
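As an optional sanity check (standard kind and kubectl commands, not part of the tutorial scripts), you can confirm that the cluster and its three nodes exist before moving on:
kind get clusters                      # should list "demo"
kubectl get nodes --context kind-demo  # one control-plane and two worker nodes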
Check that the cluster is properly installed, and notice that the coredns Pods remain in the Pending state until a CNI is installed:
kubectl get pods -n kube-system
ubuntu@ip-172-31-7-10:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-64897985d-fjrh7 0/1 Pending 0 100s
coredns-64897985d-pbsrq 0/1 Pending 0 100s
etcd-demo-control-plane 1/1 Running 0 115s
kube-apiserver-demo-control-plane 1/1 Running 0 115s
kube-controller-manager-demo-control-plane 1/1 Running 0 117s
kube-proxy-8v2fm 1/1 Running 0 82s
kube-proxy-b9xxq 1/1 Running 0 100s
kube-proxy-zr5dk 1/1 Running 0 94s
kube-scheduler-demo-control-plane 1/1 Running 0 114s
Install the Tigera Calico operator and custom resource definitions, then install Calico itself by creating the necessary custom resources:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml
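If you’d like to watch the operator come up before waiting on the Calico Pods, a quick look at the operator namespace (the namespace used by the Tigera operator manifest) is enough:
kubectl get pods -n tigera-operator   # the tigera-operator Pod should reach Running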
Wait for the Calico Pods to reach the Ready state:
sleep 5 && kubectl wait pods --all=True -n calico-system --for=condition=Ready --timeout=60s
ubuntu@ip-172-31-7-10:~$ sleep 5 && kubectl wait pods --all=True -n calico-system --for=condition=Ready --timeout=60s
pod/calico-kube-controllers-7d6749878f-m9427 condition met
pod/calico-node-7qnfz condition met
pod/calico-node-7s6f8 condition met
pod/calico-node-jqf5s condition met
pod/calico-typha-d779855d-lgmrp condition met
pod/calico-typha-d779855d-pcjrp condition met
pod/csi-node-driver-2hw2k condition met
pod/csi-node-driver-bpjsz condition met
pod/csi-node-driver-mbhtn condition met
Check that the coredns Pods are Ready and Running:
kubectl get pods --namespace=kube-system
ubuntu@ip-172-31-7-10:~$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-64897985d-fjrh7 1/1 Running 0 23m
coredns-64897985d-pbsrq 1/1 Running 0 23m
etcd-demo-control-plane 1/1 Running 0 23m
kube-apiserver-demo-control-plane 1/1 Running 0 23m
kube-controller-manager-demo-control-plane 1/1 Running 0 23m
kube-proxy-8v2fm 1/1 Running 0 22m
kube-proxy-b9xxq 1/1 Running 0 23m
kube-proxy-zr5dk 1/1 Running 0 22m
kube-scheduler-demo-control-plane 1/1 Running 0 23m
In order to use Panoptica, you will need to create an account within the Panoptica portal. Open a browser to the Panoptica site and click Get Panoptica Free to create a free account.
Click the Sign up option and provide the required information to create your free account, or use your existing account if you already have one (Panoptica’s free tier can be deployed on a single cluster with up to 15 nodes).
Select whether to use your credentials with Google, GitHub, or Cisco to log in to Panoptica (using SSO), or whether to create unique credentials for Panoptica with an email and password. Click Sign up to continue the process. When the sign-up process is complete, you will be redirected to the Panoptica home page. On subsequent logins, select the SSO provider or enter your credentials directly to sign in, according to how you defined your account above.
Typically, you will receive an invitation email from Panoptica. Follow the link in the email to set up an account on the Panoptica console. With your account, you can deploy the Panoptica agent on the lab environment’s Kubernetes cluster and manage the configuration of the Panoptica environment.
From the Panoptica dashboard, we will provision a new service user to generate access and secret keys, which will be used by Terraform to provision and manage Panoptica resources.
NOTE: We could manually deploy the Panoptica binaries to our cluster; however, by leveraging Terraform we make this process simple and repeatable.
From the Panoptica dashboard, create a new service user named terraform-service:
Copy/paste the access and secret keys into the appropriate spots in the ~/panoptica/terraform/terraform.tfvars file (both vim and nano are installed within the EC2 instance).
Inside of the ~/panoptica/terraform/ folder, there are two other files that we will examine. The first is the variables file, given below:
variable "access_key" {
description = "Panoptica Access Key"
type = string
sensitive = true
}
variable "secret_key" {
description = "Panoptica Secret Key"
type = string
sensitive = true
}
variable "environment_name" {
description = "Name assigned to the environment"
type = string
default = "kind-demo"
}
// Run "kubectl config current-context"
variable "kubernetes_cluster_context_name" {
description = "Name of the Kubernetes cluster context used to connect to the API"
type = string
default = "kind-demo"
}
This file includes the name assigned to the kind cluster and the current Kubernetes cluster context; none of these values need to be edited.
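Given those variable definitions, the only values terraform.tfvars must supply are the two keys (the other variables have defaults). One way to populate the file from the shell is sketched below; the placeholder values are obviously not real keys:
cat > ~/panoptica/terraform/terraform.tfvars <<'EOF'
access_key = "<your-panoptica-access-key>"
secret_key = "<your-panoptica-secret-key>"
EOF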
The second file (~/panoptica/terraform/main.tf) performs the deployment of the Panoptica components to the cluster with specific settings. The contents of main.tf are given below.
terraform {
  required_providers {
    securecn = {
      source = "Portshift/securecn"
      version = ">= 1.1.0"
    }
  }
}
// Configure Panoptica provider with keys
provider "securecn" {
  access_key = var.access_key
  secret_key = var.secret_key
}
// Provision K8s cluster in Panoptica
resource "securecn_k8s_cluster" "cluster" {
  kubernetes_cluster_context = var.kubernetes_cluster_context_name
  name = var.environment_name
  ci_image_validation = false
  cd_pod_template = false
  istio_already_installed = false
  connections_control = true
  multi_cluster_communication_support = false
  inspect_incoming_cluster_connections = false
  fail_close = false
  persistent_storage = false
  api_intelligence_dast = true
  install_tracing_support = true
  token_injection = true
}
There is one additional Terraform file, which we will analyze in the next step. In order to deploy Panoptica to your cluster, run the following commands (which include moving the environment-for-sock-shop.tf file out of the directory for now):
cd ~/panoptica/terraform/
mv environment-for-sock-shop.tf ../environment-for-sock-shop.tf
terraform init
terraform apply -auto-approve
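Once the apply finishes, you can optionally watch the Panoptica components come up on the cluster; they are deployed into the portshift namespace (the same namespace referenced later for the APIClarity NodePort):
kubectl get pods -n portshift   # Panoptica controller components should reach Running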
On the Panoptica dashboard, check that the cluster has been added to the list of existing clusters. Check that the related environment has been added to the list of existing environments.
To explore the API security capabilities of Panoptica, we will deploy the sock-shop application. It is based on a microservice architecture, and each microservice communicates via REST APIs. Users can order socks via the UI or the REST APIs. This version of the application is capable of making orders and consuming external APIs on api-m.sandbox.paypal.com.
You can deploy the controller on one or more namespaces in the cluster. Namespaces that have a controller deployed will be protected by Panoptica. We’ll begin by creating a namespace (a collection of resources within the cluster) and then labeling that namespace to indicate that it should be protected by Panoptica.
kubectl create namespace sock-shop
kubectl label namespace sock-shop SecureApplication-protected=full --overwrite
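A quick check that the namespace exists and carries the protection label:
kubectl get namespace sock-shop --show-labels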
We’ll then need to move the environment-for-sock-shop.tf file back into the terraform directory and apply it. This Terraform HCL file is given below:
resource "securecn_environment" "sock-shop-testing" {
name = "kind-demo-sock-shop-testing"
description = "testing environment"
kubernetes_environment {
cluster_name = securecn_k8s_cluster.cluster.name
namespaces_by_names = ["sock-shop"]
}
}
We’ll apply this configuration using the following commands
cd ~/panoptica/terraform/
mv ../environment-for-sock-shop.tf ./environment-for-sock-shop.tf
terraform apply -auto-approve
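To confirm that the new environment resource was actually created, the Terraform state will now include it:
terraform state list | grep securecn_environment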
While not required, if we wish to use the APIClarity dashboard for troubleshooting, we need to expose this service (which is part of Panoptica) externally from the cluster. This requires applying a NodePort configuration for the APIClarity service within the cluster, exposing it outside of the cluster, and then proxying it from the host running kind.
The NodePort definition can be found in the ~/panoptica/lab/apiclarity-nodeport.yaml file and is given below.
apiVersion: v1
kind: Service
metadata:
  name: apiclarity-nodeport
  labels:
    app: apiclarity-nodeport
  namespace: portshift
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30002
  selector:
    app: apiclarity
We can then use kubectl to deploy the definition to the running cluster.
cd ~/panoptica/lab/
kubectl apply -f apiclarity-nodeport.yaml
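You can verify that the Service was created and is bound to the expected node port:
kubectl get svc -n portshift apiclarity-nodeport   # should show a NodePort mapping of 8080:30002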
Finally, we’ll expose the NodePort service outside of the lab environment on port 8081 using Caddy (a reverse proxy system):
caddy reverse-proxy --from :8081 --to 172.18.0.4:30002 > /dev/null 2>&1 &
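The 172.18.0.4 address is the Docker-assigned IP of one of the kind nodes; if your node addresses differ, look them up first and substitute the address you see:
kubectl get nodes -o wide   # the INTERNAL-IP column shows each node’s address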
You can access the APIClarity dashboard via HTTP on port 8081 of the AWS EC2 instance’s FQDN (for example http://ec2-44-200-168-73.compute-1.amazonaws.com:8081, which is the same FQDN that was output from the Terraform run and used to initiate your SSH session). If it doesn’t work on the first attempt, give it a little time (~30 s) and reload the page.
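You can also confirm from the EC2 shell that the proxy is answering before trying it in a browser:
curl -sI http://localhost:8081 | head -n 1   # expect an HTTP status line once APIClarity is reachable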
In order to deploy the SockShop application, we’ll need to apply the configuration manifest. It is too large to include inline, but it can be found at ~/panoptica/lab/sock-shop.yaml. We will deploy the application to the previously created protected namespace called sock-shop.
kubectl apply -f ~/panoptica/lab/sock-shop.yaml --namespace=sock-shop
You should see output similar to:
ubuntu@ip-172-31-95-18:$ kubectl apply -f ~/panoptica/lab/sock-shop.yaml --namespace=sock-shop
namespace/sock-shop created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/carts created
service/carts created
deployment.apps/carts-db created
service/carts-db created
deployment.apps/catalogue created
service/catalogue created
deployment.apps/catalogue-db created
service/catalogue-db created
deployment.apps/front-end created
service/front-end created
deployment.apps/orders created
service/orders created
deployment.apps/orders-db created
service/orders-db created
service/payment created
deployment.apps/payment created
deployment.apps/payment-db created
service/payment-db created
deployment.apps/queue-master created
service/queue-master created
deployment.apps/rabbitmq created
service/rabbitmq created
deployment.apps/session-db created
service/session-db created
deployment.apps/shipping created
service/shipping created
service/user created
deployment.apps/user created
service/user-db created
deployment.apps/user-db created
Because the SockShop application already includes a NodePort service exposing the web UI of the app outside of the cluster, we simply need to proxy it from the host. We can accomplish this using the following command with caddy:
caddy reverse-proxy --from :8080 --to 172.18.0.4:30001 > /dev/null 2>&1 &
We’ll want to ensure that all Pods within the sock-shop namespace are deployed and ready. We can verify this using the following command:
watch kubectl get pods -n sock-shop
When the application is ready, you should see output similar to:
Every 2.0s: kubectl get pods -n sock-shop
NAME READY STATUS RESTARTS AGE
carts-85f4d4b45-sj92n 3/3 Running 0 88s
carts-db-59578c5464-ckwtn 3/3 Running 0 88s
catalogue-bbdcf5467-2v4ks 3/3 Running 0 88s
catalogue-db-6bf8d6ff8f-ph62m 3/3 Running 0 88s
front-end-7b4dfc5669-6xsbg 3/3 Running 0 88s
orders-7f57779c86-ss6m2 3/3 Running 0 88s
orders-db-848c4c6db4-rtjnk 3/3 Running 0 88s
payment-cdf9694b4-gf6zx 3/3 Running 0 88s
payment-db-55cf8bcffb-ncfvx 3/3 Running 0 88s
queue-master-96b8bb948-mnp4p 3/3 Running 0 88s
rabbitmq-f5b858486-v5cwx 4/4 Running 0 87s
session-db-7b8844d8d5-xbxqr 3/3 Running 0 87s
shipping-797475685c-lzv5v 3/3 Running 0 87s
user-7c48486d5-244vz 3/3 Running 1 (75s ago) 87s
user-db-767cf48587-jj7zg 3/3 Running 0 87s
Press Ctrl+C to exit the command.
You can then check access to the SockShop web UI via the FQDN of the AWS instance on port 8080 (for example http://ec2-44-200-168-73.compute-1.amazonaws.com:8080). It should look the same as the top image on this page.
Finally, we’ll want to deploy a sample load on the application that will simulate users browsing and ordering products within the app. We’ll start by injecting a Python script into a configmap. This step has already been performed and saved, but is included for reference.
kubectl create configmap --dry-run=client user-traffic-load-configmap --from-file=~/panoptica/lab/user-traffic-load.py --output yaml | tee ~/panoptica/lab/user-traffic-load-configmap.yaml
Again, this step has already been done, and the result is included in the user-traffic-load-configmap.yaml file within the ~/panoptica/lab/ directory. Next, we’ll need to apply this configmap to the cluster in the appropriate namespace.
kubectl apply -f ~/panoptica/lab/user-traffic-load-configmap.yaml --namespace=sock-shop
Finally, we’ll need to deploy a pod (container) under the sock-shop namespace which will consume the configmap and simulate user traffic.
kubectl apply -f ~/panoptica/lab/user-traffic-load-deployment.yaml --namespace=sock-shop
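To confirm that the load generator is actually producing traffic, you can tail its logs; the deployment name below is inferred from the manifest file name and may differ in your copy:
kubectl get deployments -n sock-shop
kubectl logs -n sock-shop deployment/user-traffic-load --tail=20   # deployment name assumed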
Cisco Panoptica can securely manage tokens that you use to access your API sites. It stores them securely in a vault, and securely injects them into your workloads (as environment variables) when they are deployed in clusters.
Panoptica management of API tokens has these advantages:
In order to accomplish this, when you create a cluster, set the API token injection option to “yes”. In the main.tf we used to deploy Panoptica, the attribute token_injection = true was included when specifying the securecn_k8s_cluster resource.
On the TOKENS tab of the APIs page, click New Token and create a token with the following values:
- paypal (the token name)
- /secret/data/paypal#paypaltoken (the Vault secret path)
- api-m.sandbox.paypal.com (the API it applies to)
- Jul 31st 2023 (the expiration date, at the time of writing)
Click the TOKEN INJECTION button; this opens a new Deployment Policy rule for Token Injection. The tokens in the rule will be injected, as environment variables, into the workloads selected by the rule. Complete the following steps:
Don’t forget to apply the new policy:
You can verify your configuration under the Policies page, on the Deployment Rules tab; the paypal rule should be listed:
Run the following script to get the API_ACCESS_TOKEN from the PayPal API and add it as a secret in the Vault (bank-vaults):
cd ~/panoptica/lab/
chmod +x execute.sh
chmod +x clean.sh
./execute.sh
The script will also restart the Pods to allow for token injection. You should see output similar to:
ubuntu@ip-172-31-95-18:~$ cd ~/panoptica/lab/
ubuntu@ip-172-31-95-18:~/panoptica/lab$ chmod +x execute.sh
ubuntu@ip-172-31-95-18:~/panoptica/lab$ chmod +x clean.sh
ubuntu@ip-172-31-95-18:~/panoptica/lab$ ./execute.sh
Success! Data deleted (if it existed) at: secret/paypal
Paypal access token: A21AAL-1ejtro28QI0Jqcgiq1iKpOippNr4zBJ9_UnO4W2OrCSpRlT3UzEfDxZhfcWfZRFFcKphKy0rMVLZiMWHjQLdHWiEuQ
Key Value
--- -----
created_time 2022-12-14T13:21:23.270567663Z
deletion_time n/a
destroyed false
version 1
Waiting for the policy to be effective ...
Restartng the relevant pod to allow for injection
pod "payment-cdf9694b4-n2r7t" deleted
pod "payment-db-55cf8bcffb-s7fpj" deleted
Waiting for user pod to be ready ...
User pod is now running.
payment-cdf9694b4-tds52
payment-db-55cf8bcffb-hltqp
Open the payment Pod description and search for the injected Vault reference:
kubectl describe pod -n sock-shop -l app=payment | grep vault:/secret/data/paypal#paypaltoken
Expected output:
ubuntu@ip-172-31-95-18:~/panoptica/lab$ kubectl describe pod -n sock-shop -l app=payment | grep vault:/secret/data/paypal#paypaltoken
PAYPAL_ACCESS_TOKEN: vault:/secret/data/paypal#paypaltoken
Once the API token is successfully injected, the vulnerable application is fully functional: it offers catalogue, cart, payment, and checkout functionality.
Verify in the navigator (of the Panoptica dashboard) that all application services are known to Panoptica. Use the Expand all button to observe the full mesh of application services.
The EXTERNAL APIS tab under the APIs page shows the external APIs consumed by workloads in your cluster.
Click on an API to show more details.
In this section we will use Panoptica to do a static analysis of the application’s APIs based on its OpenAPI specification. API specifications can be either provided by the user or reconstructed by Panoptica.
Panoptica builds a schema based on the traffic. In a previous section we loaded an application with simulated user traffic. This will be enough for Panoptica to reconstruct a part of the application’s OpenAPI specification.
Take a look at the discovered internal APIs.
Each microservice uses APIs to communicate.
We will focus on only one - catalogue. The same steps can be repeated for the rest of the microservices.
Open the catalogue microservice from the API inventory.
In the specs tab we have two available options: provide an OpenAPI specification yourself, or let Panoptica reconstruct one from the observed traffic. We will choose the latter.
The data collection duration depends on the user traffic bandwidth; in our case, a 2-minute collection is enough. Ensure that you specify the cluster to collect the data from.
After the data collection, you can review the reconstructed spec.
Two endpoints are discovered:
Check both and approve.
Specify the latest OAS version - V3.
After the review approval, Panoptica builds an OpenAPI schema.
After the spec is built, you can analyze the Swagger file at any time. Under the specs tab, you will find an option to See on swagger.
The Swagger will open in a separate page.
Once the catalogue API schema is available, Panoptica automatically analyzes it. The results are available in the UI.
Under the risk findings you will find a new group called api-specification. Any schema-related risks are listed here. Take a look at the findings: some require action, while others are related to the fact that the reconstructed schema is minimal.
NOTE: A reconstructed schema can be made more complete by increasing the data collection period or by manually adding additional information.
The final step is to ensure that we destroy the EC2 instance so that we are not charged for unintended usage. This is done by exiting the SSH session to the instance and performing a destroy action with Terraform from the local panoptica-tutorial-sample-code/terraform directory (the same directory from which the instance was created).
exit
terraform destroy -auto-approve
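After the destroy completes, an empty state listing is a quick way to confirm that nothing was left behind:
terraform state list   # no output means all resources were destroyed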
Congratulations! You have successfully completed the tutorial exploring Panoptica and the security features it provides for your Kubernetes environment!