What You’ll Learn

What You’ll Need

AWS Instances and Cost

Prior to starting the lab, you’ll need to create an AWS account. This tutorial depends on a large AWS Elastic Compute Cloud (EC2) instance that does not qualify for the free tier within the AWS cloud. As configured and deployed in us-east-1, this instance bills at around $1.25/hr, which includes the instance and the additional root-block storage; it may bill differently in other regions. A payment method must be on file before the instance can be provisioned. We will be using Terraform to provision the instance, which will allow us to easily destroy it after we are done with the tutorial. Please remember that the AWS resources used in this lab are not free.

A Note About AWS Regions

AWS has many different regions in which their datacenters and cloud offerings are available. Generally speaking, it is advantageous (from a latency perspective) to choose a region that is geographically close to you. However, it is important to note that services within the AWS console may be either “global” in scope (such as IAM, where changes affect all regions) or regional (such as EC2, where a key pair generated in one region cannot be used in another region). The scope of the service is indicated in the upper-right corner of the service window.

Global region

Local region

Adding a Payment Method

Once you are signed up with an AWS account, you will need to access the billing platform by typing “Billing” in the AWS service search bar and selecting it. Once in the billing service, click on “Billing Preferences” and add a new payment method. All major credit and debit cards are accepted by default. Please be sure to delete the lab with Terraform when you are finished so that you are not charged for usage beyond the exercises within this tutorial.

AWS Billing Search

AWS payment screen

Creating Access Keys

Once billing information has been input, you will need to create access keys to allow your local machine to provision the EC2 instance using Terraform. This is accomplished by searching for IAM in the AWS Service search bar and selecting it.

AWS IAM Search

When the IAM service page appears, click on “Users” on the left-hand side of the window to bring up a list of the users within the account. Select your username, which brings up a new page with details about your user. Select “Security Credentials” in the main window pane, scroll down to “Access Keys”, and click on “Create Access Key”.

AWS Access Key Page

In the resulting window, select “Other” followed by “Next”. In the next window, you can set an optional description for the access key.

AWS Create Access Key

Once you click “Next”, you will be presented with the access key and secret key for your account, along with the option to download a CSV file of these credentials. Once you leave this screen, you will not be able to view the secret key again. You may want to download the CSV file for safekeeping, but do not commit any of these credentials to a public repository; these credentials will allow anyone to access your account without your knowledge!

AWS Created Access Key
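If you keep any of this lab’s files under version control, one way to reduce the risk of an accidental leak is to explicitly ignore the downloaded credential and key files. The snippet below is only an illustration and assumes a git repository in your working directory.

# Illustrative only: if your working directory is a git repository, make
# sure downloaded credential (CSV) and key (.pem) files can never be committed
echo "*.csv" >> .gitignore
echo "*.pem" >> .gitignore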

Creating an EC2 Key Pair

The final step in gathering the AWS prerequisites is to create a new EC2 key pair. You can do this by searching for EC2 in the AWS service search bar and selecting it.

AWS EC2 Search

From there, select “Key Pairs” in the left pane (it’s under Network and Security) and select “Create Key Pair”.

Creating EC2 Keypair

Provide a name, ensure that both “RSA” and “.pem” are selected, and then click on “Create key pair”.

EC2 key pair options

This will download the key pair to your local machine. Please do not delete this key pair, as it will be required to SSH to the cloud instance that will be stood up in EC2, which we will accomplish next.
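Before the key can be used for SSH, its permissions must be restricted so that the SSH client will accept it. A minimal example, assuming the key was downloaded to ~/Downloads with the name you chose above (adjust the path and filename to match yours):

# Restrict the private key so ssh does not reject it as too permissive
chmod 400 ~/Downloads/my-ec2-keypair.pem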

Installing Terraform and Gathering Files

The EC2 instance will be provisioned in the AWS cloud using Terraform. By using Terraform, we can package the required files for deployment and ensure that every instance is created in the same way. Additionally, Terraform will allow us to destroy all configured resources after we are done, ensuring that we are not charged for usage outside of the lab exploration. In order to use Terraform, you’ll need to install it for your operating system using the instructions found at https://terraform.io/downloads. If you choose to use the binary download, please ensure that it resides in your $PATH or in the folder from which you plan on executing the Terraform files.
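A quick sanity check after installation (the /usr/local/bin destination below is just one common choice for a manually downloaded binary):

# Verify Terraform is installed and visible on your $PATH
terraform version

# If you downloaded the raw binary, one option is to move it onto your $PATH
sudo mv terraform /usr/local/bin/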

Next, you’ll need to download the code for this lab, found here: https://github.com/CiscoLearning/panoptica-tutorial-sample-code/. This repository contains all of the Terraform required to build the AWS EC2 instance, as well as files that will be needed for the completion of the tutorial. Once the code has been cloned, you’ll need to navigate to the panoptica-tutorial-sample-code/terraform/terraform.tfvars file and add in the required information that we acquired in the previous steps (AWS IAM keys, AWS EC2 key pair name, and EC2 key pair location).

AWS keys in terraform.tfvars
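Taken together, the clone-and-edit step might look like the sketch below; nano is only one editor option, and the variables to fill in are already defined in the file.

git clone https://github.com/CiscoLearning/panoptica-tutorial-sample-code
cd panoptica-tutorial-sample-code/terraform

# Fill in the AWS access/secret keys, the EC2 key pair name, and the
# key pair file location using the variables already present in the file
nano terraform.tfvars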

Once this file has been modified to reflect your information, the final piece is ensuring that the region declared in the Terraform matches the region in which you created the EC2 key pair (and in which you intend to deploy). The region is declared at the top of the panoptica-main.tf file.

AWS region declaration in Terraform
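A quick way to confirm which region the Terraform will use, without opening an editor (assuming you are still in the terraform folder of the cloned repository):

# Show the region declaration near the top of the file
grep -n "region" panoptica-main.tf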

Once these changes have been made, move your terminal into the panoptica-tutorial-sample-code/terraform folder and perform a terraform init followed by a terraform apply -auto-approve (we’re skipping the plan step, as we know what will be created). This process should take only a couple of minutes. At the end, you should have a successfully built AWS VM, and the output of the Terraform run should indicate the FQDN of your new AWS EC2 instance. The instance is fully updated and has Docker, kubectl, kind, caddy, k9s, and terraform installed; the files in the panoptica-tutorial-sample-code/terraform folder have also been copied over in .tar.gz format and decompressed into the ~/panoptica directory of the VM.

To access the VM, you’ll need to use the following command syntax: ssh -i "location of key pair file downloaded" ubuntu@fqdn.output.from.terraform, which in my case looks something like ssh -i "~/.ssh/qms-us-east-1-key.pem" ubuntu@ec2-54-227-21-142.compute-1.amazonaws.com. The FQDN is written to the output of the terraform apply command (as seen at the end of the GIF below) and is also saved as a file within the terraform directory. Using the SSH command above should connect you directly to your AWS instance.

GIF of AWS instance being created
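If you lose track of the FQDN, you can re-display the Terraform outputs at any time from the terraform directory; the exact output name depends on the repository’s code, so the sketch below simply lists everything before reusing the example SSH command from above.

# List all Terraform outputs (the instance FQDN is among them)
terraform output

# Then connect, substituting your key path and FQDN
ssh -i "~/.ssh/qms-us-east-1-key.pem" ubuntu@ec2-54-227-21-142.compute-1.amazonaws.com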

The goal of these first tasks is to deploy a Kubernetes demo cluster based on Kind (Kubernetes in Docker) that will be protected with Panoptica. The cluster includes one control-plane and two worker nodes. All files required for the rest of this tutorial should be located under the ~/panoptica directory on the EC2 VM.

Create Kind cluster

To instantiate the kind cluster, we will reference two files. The first is the ~/panoptica/lab/kind-config.yaml file, which defines the number of nodes (both control plane and worker) for the cluster, as well as any special configuration required (for our usage, we simply disable the default kindnet overlay networking stack in favor of Calico).

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
networking:
  disableDefaultCNI: true # disable the default Kindnet CNI
  podSubnet: 192.168.0.0/16 # set to Calico's default subnet

To deploy the kind cluster, there is a bash script located in ~/panoptica/cluster called cluster-build.sh, which we will use to run kind. The command executed by the script is given below:

kind create cluster --name demo --config kind-config.yaml --image="kindest/node:v1.23.10@sha256:f047448af6a656fae7bc909e2fab360c18c487ef3edc93f06d78cdfd864b2d12"

and we can run this script by typing bash ~/panoptica/cluster/cluster-build.sh and pressing Enter in the terminal window.

You should have an output similar to:

Creating cluster "demo" ...
 ✓ Ensuring node image (kindest/node:v1.23.10) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-demo"
You can now use your cluster with:

kubectl cluster-info --context kind-demo

Thanks for using kind! 😊
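Before moving on, you can optionally confirm that kind registered the cluster and that the kubectl context was set as expected:

# The cluster should be listed as "demo"
kind get clusters

# Confirms the API server is reachable via the new context
kubectl cluster-info --context kind-demo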

Check that the cluster is properly installed, and notice that the coredns Pods remain in a Pending state until a CNI is installed:

kubectl get pods -n kube-system
ubuntu@ip-172-31-7-10:~$ kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-64897985d-fjrh7                      0/1     Pending   0          100s
coredns-64897985d-pbsrq                      0/1     Pending   0          100s
etcd-demo-control-plane                      1/1     Running   0          115s
kube-apiserver-demo-control-plane            1/1     Running   0          115s
kube-controller-manager-demo-control-plane   1/1     Running   0          117s
kube-proxy-8v2fm                             1/1     Running   0          82s
kube-proxy-b9xxq                             1/1     Running   0          100s
kube-proxy-zr5dk                             1/1     Running   0          94s
kube-scheduler-demo-control-plane            1/1     Running   0          114s

Deploy Calico CNI

Install the Tigera Calico operator and custom resource definitions, then install Calico by creating the necessary custom resource:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/custom-resources.yaml

Wait for the Calico Pods to reach Ready state:

sleep 5 && kubectl wait pods --all=True -n calico-system --for=condition=Ready --timeout=60s
ubuntu@ip-172-31-7-10:~$ sleep 5 && kubectl wait pods --all=True -n calico-system --for=condition=Ready --timeout=60s
pod/calico-kube-controllers-7d6749878f-m9427 condition met
pod/calico-node-7qnfz condition met
pod/calico-node-7s6f8 condition met
pod/calico-node-jqf5s condition met
pod/calico-typha-d779855d-lgmrp condition met
pod/calico-typha-d779855d-pcjrp condition met
pod/csi-node-driver-2hw2k condition met
pod/csi-node-driver-bpjsz condition met
pod/csi-node-driver-mbhtn condition met

Check the coredns Pods are Ready and Running:

kubectl get pods --namespace=kube-system
ubuntu@ip-172-31-7-10:~$ kubectl get pods --namespace=kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-64897985d-fjrh7                      1/1     Running   0          23m
coredns-64897985d-pbsrq                      1/1     Running   0          23m
etcd-demo-control-plane                      1/1     Running   0          23m
kube-apiserver-demo-control-plane            1/1     Running   0          23m
kube-controller-manager-demo-control-plane   1/1     Running   0          23m
kube-proxy-8v2fm                             1/1     Running   0          22m
kube-proxy-b9xxq                             1/1     Running   0          23m
kube-proxy-zr5dk                             1/1     Running   0          22m
kube-scheduler-demo-control-plane            1/1     Running   0          23m

Cluster instantiation

Create a Panoptica account

In order to use Panoptica, you will need to create an account within the Panoptica portal. Open a browser to the Panoptica site and click on Get Panoptica Free to create a free account.

Panoptica website banner

Click the Sign up option and provide the required information to create your free account, or use your existing account if you already have one (Panoptica’s free tier can be deployed on a single cluster with up to 15 nodes).

Signup and login options

Select whether to use your credentials with Google, GitHub, or Cisco to log in to Panoptica (using SSO), or whether to create unique credentials for Panoptica with an email and password. Click Sign up to continue the process. When the sign-up process is complete, you will be redirected to the Panoptica home page.

On subsequent logins, select the SSO provider or enter your credentials directly to sign in, according to how you defined your account above.

Typically, you will receive an invitation email from Panoptica. Follow the link in the email to set up an account on the Panoptica console.

With your account, you can deploy the Panoptica agent on the lab environment’s Kubernetes cluster as well as manage the configuration of the Panoptica environment.

Create service user and gather keys

From the Panoptica dashboard, we will provision a new service user to generate access and secret keys, which will be used by Terraform to provision and manage Panoptica resources.

NOTE: We could deploy the Panoptica binaries to our cluster manually; however, by leveraging Terraform, we make this process simple and repeatable.

From the Panoptica dashboard, create a new service user named terraform-service:

service-user

Panoptica keys

Copy and paste the access and secret keys into the appropriate spots in the ~/panoptica/terraform/terraform.tfvars file (both vim and nano are installed in the EC2 instance).

Panoptica tfvars

Deploy Panoptica Using Terraform

Inside the ~/panoptica/terraform/ folder, there are two other files that we will examine. The first is the variables file, given below.

variable "access_key" {
  description = "Panoptica Access Key"
  type        = string
  sensitive   = true
}
variable "secret_key" {
  description = "Panoptica Secret Key"
  type        = string
  sensitive   = true
}
variable "environment_name" {
  description = "Name assigned to the environment"
  type        = string
  default     = "kind-demo"
}
// Run "kubectl config current-context"
variable "kubernetes_cluster_context_name" {
  description = "Name of the Kubernetes cluster context used to connect to the API"
  type        = string
  default     = "kind-demo"
}

This file includes the name assigned to the kind cluster and the current Kubernetes cluster context; none of these values needs to be edited. The second file (~/panoptica/terraform/main.tf) deploys the Panoptica binaries to the cluster with specific settings, including API token injection and tracing support.
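If you want to confirm that the default context name matches your environment before applying, a quick check (kind prefixes the cluster name with "kind-", so the context for our cluster is kind-demo):

# Should print: kind-demo
kubectl config current-context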

main.tf is given below.

terraform {
  required_providers {
    securecn = {
      source = "Portshift/securecn"
      version = ">= 1.1.0"
    }
  }
}

// Configure Panoptica provider with keys
provider "securecn" {
 access_key = var.access_key
 secret_key = var.secret_key
}

// Provision K8s cluster in Panoptica
resource "securecn_k8s_cluster" "cluster" {
  kubernetes_cluster_context = var.kubernetes_cluster_context_name
  name = var.environment_name
  ci_image_validation = false
  cd_pod_template = false
  istio_already_installed = false
  connections_control = true
  multi_cluster_communication_support = false
  inspect_incoming_cluster_connections = false
  fail_close = false
  persistent_storage = false
  api_intelligence_dast = true
  install_tracing_support = true
  token_injection = true
}

There is one additional Terraform file, which we will analyze in the next step. In order to deploy Panoptica to your cluster, perform the following commands (which include moving environment-for-sock-shop.tf out of the terraform directory for now):

cd ~/panoptica/terraform/
mv environment-for-sock-shop.tf ../environment-for-sock-shop.tf
terraform init
terraform apply -auto-approve

Panoptica installation with Terraform
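Optionally, you can also check from the CLI that the Panoptica components come up. A minimal check, assuming the controller is deployed into the portshift namespace (the same namespace referenced by the APIClarity NodePort manifest later in this tutorial):

# Panoptica components should reach the Running state
kubectl get pods -n portshift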

Check Panoptica dashboard

On the Panoptica dashboard, check the Cluster is added to the list of existing clusters.

Verifying cluster

Check the related environment is added to the list of existing environments.

Verifying environment

Application background and information

To explore the API security capabilities of Panoptica, we will deploy the sock-shop application.

Sock Shop webpage

The application is based on a microservice architecture, in which each microservice communicates via REST APIs. Users can order socks via the UI or via REST APIs.

Panoptica visibility into Sock Shop

This version of the application is capable of making orders and consuming external APIs on api-m.sandbox.paypal.com.

E-Commerce in Sock Shop

Paypal developer sandbox

Create a labeled/protected namespace; prepare Panoptica environment

You can deploy the controller on one or more namespaces in the cluster. Namespaces that have a controller deployed will be protected by Panoptica. We’ll begin by creating a namespace (a collection of resources within the cluster) and then labeling that namespace to indicate that it should be protected by Panoptica.

kubectl create namespace sock-shop
kubectl label namespace sock-shop SecureApplication-protected=full --overwrite
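You can confirm that the namespace exists and carries the protection label before moving on:

# The SecureApplication-protected=full label should appear in the output
kubectl get namespace sock-shop --show-labels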

We’ll then need to move the environment-for-sock-shop.tf file back into the terraform directory and apply it. This Terraform HCL file is given below:

resource "securecn_environment" "sock-shop-testing" {

  name = "kind-demo-sock-shop-testing"
  description = "testing environment"

  kubernetes_environment {
    cluster_name = securecn_k8s_cluster.cluster.name
    namespaces_by_names = ["sock-shop"]
  }
}

We’ll apply this configuration using the following commands:

cd ~/panoptica/terraform/
mv ../environment-for-sock-shop.tf ./environment-for-sock-shop.tf
terraform apply -auto-approve

Get Access to APIClarity dashboard

While not required, if we wish to use the APIClarity dashboard for troubleshooting, we need to expose this service (which is part of Panoptica) externally from the cluster. This requires applying a NodePort service for the APIClarity port within the cluster, exposing it outside of the cluster, and then proxying it from the local kind host.

The NodePort definition can be found in the ~/panoptica/lab/apiclarity-nodeport.yaml file and is shown below.

apiVersion: v1
kind: Service
metadata:
  name: apiclarity-nodeport
  labels:
    app: apiclarity-nodeport
  namespace: portshift
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30002
  selector:
    app: apiclarity

We can then use kubectl to deploy the definition to the running cluster.

cd ~/panoptica/lab/
kubectl apply -f apiclarity-nodeport.yaml
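A quick check that the service landed where expected:

# The service should show type NodePort with 8080:30002/TCP
kubectl get svc apiclarity-nodeport -n portshift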

Finally, we’ll expose the NodePort service outside of the lab environment on port 8081 using Caddy (a reverse proxy):

caddy reverse-proxy --from :8081 --to 172.18.0.4:30002 > /dev/null 2>&1 &
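The 172.18.0.4 address above is a node IP on kind’s Docker network and may differ in your environment; if the proxy does not respond, check the actual node addresses first (the commands below are illustrative ways to find them).

# INTERNAL-IP column shows each node's address on the kind network
kubectl get nodes -o wide

# Alternatively, inspect the Docker network that kind creates
docker network inspect kind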

You can access the APIClarity dashboard by browsing to the FQDN of the AWS EC2 instance over HTTP on port 8081 (for example http://ec2-44-200-168-73.compute-1.amazonaws.com:8081; this is the same FQDN that was output by the Terraform run and that you used to initiate your SSH session). If it doesn’t work on the first attempt, wait a short time (~30 seconds) and reload the page.

Deploy the Sock Shop Application

In order to deploy the Sock Shop application, we’ll need to apply the configuration manifest. It is too large to include inline, but it can be found at ~/panoptica/lab/sock-shop.yaml. We will deploy the application to the previously created protected namespace called sock-shop.

kubectl apply -f ~/panoptica/lab/sock-shop.yaml --namespace=sock-shop

You should have a similar output:

ubuntu@ip-172-31-95-18:$ kubectl apply -f ~/panoptica/lab/sock-shop.yaml --namespace=sock-shop
namespace/sock-shop created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/carts created
service/carts created
deployment.apps/carts-db created
service/carts-db created
deployment.apps/catalogue created
service/catalogue created
deployment.apps/catalogue-db created
service/catalogue-db created
deployment.apps/front-end created
service/front-end created
deployment.apps/orders created
service/orders created
deployment.apps/orders-db created
service/orders-db created
service/payment created
deployment.apps/payment created
deployment.apps/payment-db created
service/payment-db created
deployment.apps/queue-master created
service/queue-master created
deployment.apps/rabbitmq created
service/rabbitmq created
deployment.apps/session-db created
service/session-db created
deployment.apps/shipping created
service/shipping created
service/user created
deployment.apps/user created
service/user-db created
deployment.apps/user-db created

Because the Sock Shop application already includes a NodePort service exposing the web UI of the app outside of the cluster, we simply need to proxy it from the host so that it is reachable outside of the lab environment. We can accomplish this with the following caddy command:

caddy reverse-proxy --from :8080 --to 172.18.0.4:30001 > /dev/null 2>&1 &

We’ll want to ensure that all pods within the sock-shop namespace are deployed and ready. We can verify this using the following command:

watch kubectl get pods -n sock-shop

You should see output similar to the following once the application is ready:

Every 2.0s: kubectl get pods -n sock-shop

NAME                            READY   STATUS    RESTARTS      AGE
carts-85f4d4b45-sj92n           3/3     Running   0             88s
carts-db-59578c5464-ckwtn       3/3     Running   0             88s
catalogue-bbdcf5467-2v4ks       3/3     Running   0             88s
catalogue-db-6bf8d6ff8f-ph62m   3/3     Running   0             88s
front-end-7b4dfc5669-6xsbg      3/3     Running   0             88s
orders-7f57779c86-ss6m2         3/3     Running   0             88s
orders-db-848c4c6db4-rtjnk      3/3     Running   0             88s
payment-cdf9694b4-gf6zx         3/3     Running   0             88s
payment-db-55cf8bcffb-ncfvx     3/3     Running   0             88s
queue-master-96b8bb948-mnp4p    3/3     Running   0             88s
rabbitmq-f5b858486-v5cwx        4/4     Running   0             87s
session-db-7b8844d8d5-xbxqr     3/3     Running   0             87s
shipping-797475685c-lzv5v       3/3     Running   0             87s
user-7c48486d5-244vz            3/3     Running   1 (75s ago)   87s
user-db-767cf48587-jj7zg        3/3     Running   0             87s

Press Ctrl+C to exit the command.

Sock shop deployment

You can then check access to the Sock Shop web UI by browsing to the FQDN of the AWS instance on port 8080 (for example http://ec2-44-200-168-73.compute-1.amazonaws.com:8080). It should look the same as the top image on this page.

Finally, we’ll want to deploy a sample load on the application that simulates users browsing and ordering products within the app. We’ll start by injecting a Python script into a ConfigMap. This step has already been performed and saved, but the command is included for reference.

kubectl create configmap --dry-run=client user-traffic-load-configmap --from-file=~/panoptica/lab/user-traffic-load.py --output yaml | tee ~/panoptica/lab/user-traffic-load-configmap.yaml

Again, this step has already been done, and the result is included in the user-traffic-load-configmap.yaml file within the ~/panoptica/lab/ directory. Next, we’ll apply this ConfigMap to the cluster in the appropriate namespace.

kubectl apply -f ~/panoptica/lab/user-traffic-load-configmap.yaml --namespace=sock-shop

Finally, we’ll deploy a pod (container) in the sock-shop namespace that consumes the ConfigMap and simulates user traffic.

kubectl apply -f ~/panoptica/lab/user-traffic-load-deployment.yaml --namespace=sock-shop

User load configmap
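To confirm the traffic generator is actually running, you can look for its pod; the name filter below assumes the deployment defined in user-traffic-load-deployment.yaml is named user-traffic-load, so adjust it if the manifest differs.

# The traffic-load pod should be Running in the sock-shop namespace
kubectl get pods -n sock-shop | grep user-traffic-load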

Cisco Panoptica can securely manage the tokens that you use to access your API sites. It stores them securely in a vault and injects them into your workloads (as environment variables) when they are deployed in clusters.

Managing API tokens through Panoptica keeps them out of your application code and manifests, and delivers them to workloads only at deployment time.

In order to accomplish this, when you create a cluster, set the option API token injection to “yes”. In the main.tf we used to deploy Panoptica, this was done by including the attribute token_injection = true when specifying the securecn_k8s_cluster resource.

Add a Token and Create a Token Injection Policy

Edit API token

Click on the TOKEN INJECTION button; this opens a new Deployment Policy rule for Token Injection. The tokens in the rule will be injected, as environment variables, into the workloads selected by the rule. Complete the following steps:

Step 1 of API injection rule

Step 2 of API injection rule

Step 3 of API injection rule

Step 4 of API injection rule

Don’t forget to apply the new policy:

Applying API injection policy

You can verify your configuration on the Policies page, under the Deployment Rules tab; the paypal rules should be listed:

Verifying API injection policy

Add a secret to the vault and verify injection

Run the following script to get the API_ACCESS_TOKEN from the PayPal API and add it as a secret in the bank-vaults secret store:

cd ~/panoptica/lab/
chmod +x execute.sh
chmod +x clean.sh
./execute.sh

The script will also restart the Pods to allow for token injection. You should have an output similar to:

ubuntu@ip-172-31-95-18:~$ cd ~/panoptica/lab/
ubuntu@ip-172-31-95-18:~/panoptica/lab$ chmod +x execute.sh
ubuntu@ip-172-31-95-18:~/panoptica/lab$ chmod +x clean.sh
ubuntu@ip-172-31-95-18:~/panoptica/lab$ ./execute.sh
Success! Data deleted (if it existed) at: secret/paypal
Paypal access token: A21AAL-1ejtro28QI0Jqcgiq1iKpOippNr4zBJ9_UnO4W2OrCSpRlT3UzEfDxZhfcWfZRFFcKphKy0rMVLZiMWHjQLdHWiEuQ
Key              Value
---              -----
created_time     2022-12-14T13:21:23.270567663Z
deletion_time    n/a
destroyed        false
version          1
Waiting for the policy to be effective ...
Restartng the relevant pod to allow for injection
pod "payment-cdf9694b4-n2r7t" deleted
pod "payment-db-55cf8bcffb-s7fpj" deleted
Waiting for user pod to be ready ...
User pod is now running.
payment-cdf9694b4-tds52
payment-db-55cf8bcffb-hltqp

Open the payment pod description and search for the injected vault reference:

kubectl describe pod -n sock-shop -l app=payment | grep vault:/secret/data/paypal#paypaltoken

Expected output:

ubuntu@ip-172-31-95-18:~/panoptica/lab$ kubectl describe pod -n sock-shop -l app=payment | grep vault:/secret/data/paypal#paypaltoken
      PAYPAL_ACCESS_TOKEN:           vault:/secret/data/paypal#paypaltoken

Token injection pod reset

Once the API token is successfully injected, the vulnerable application is fully functional: it offers a catalogue, a cart, and payment and checkout functionality.

In the Navigator (of the Panoptica dashboard), verify that all application services are known to Panoptica. Use the Expand all button to observe the full mesh of application services.

Viewing external services in Panoptica dashboard

The EXTERNAL APIS tab on the APIs page shows the external APIs consumed by workloads in your cluster.

External API visibility

Click on an API to show more details.

External API details

In this section, we will use Panoptica to perform a static analysis of the application’s APIs based on its OpenAPI specification. API specifications can be either provided by the user or reconstructed by Panoptica.

Panoptica builds a schema based on observed traffic. In a previous section, we loaded the application with simulated user traffic; this will be enough for Panoptica to reconstruct a part of the application’s OpenAPI specification.

Analysis of Discovered Internal APIs

Take a look at the discovered internal APIs. Each microservice uses APIs to communicate. We will focus on only one, catalogue; the same steps can be repeated for the rest of the microservices.

Discovered internal APIs

Reconstructing and Viewing an OpenAPI Specification

Open the catalogue microservice from the API inventory. In the Specs tab there are two available options: provide a specification yourself, or have Panoptica reconstruct one from observed traffic.

We will choose the latter.

Catalogue APIs

The data collection duration depends on the volume of user traffic; in our case, 2 minutes is enough. Ensure that you specify the cluster to collect the data from.

Collecting catalogue APIs

The data collection will last 2 minutes.

Collection in process

After the data collection, you can review the reconstructed spec.

Prompt to review the specification

Two endpoints are discovered. Check both and approve them.

Reviewing the recreated API spec

Specify the latest OAS version, V3.

Approve the reviewed spec

After the review approval, Panoptica builds an OpenAPI schema.

Building the API spec

After the spec build, you can always analyze the Swagger file. Under the Specs tab, you will find a See on Swagger option.

Reading the reconstructed spec

The Swagger will open in a separate page.

Viewing the reconstructed Swagger

Risk Analysis Based on OpenAPI Schema

Once the catalogue API schema is available, Panoptica automatically analyzes it. The results are available in the UI.

API risk analysis

Under the risk findings, you will find a new group called api-specification. Any schema-related risks are listed here. Take a look at the findings: some require action, while others reflect the fact that the reconstructed schema is minimal.

API risk findings

NOTE: A reconstructed schema can be made more complete by increasing the data collection period or by manually adding information.

The final step is to destroy the EC2 instance so that we are not charged for unintentional usage. To do this, exit the SSH session to the instance and then, from the local panoptica-tutorial-sample-code/terraform directory, perform a destroy action with Terraform.

exit
terraform destroy -auto-approve

Destroying the EC2 environment

Congratulations! You have successfully completed the tutorial exploring Panoptica and the security features it provides for your Kubernetes environment!

Learn More