Note: The instructions for this tutorial are written using a macOS system, but should function on any POSIX-based shell system (Linux, WSL, etc). Additional changes may be required for those using the native Windows runtime.
Before the tools can be used, they must be installed; the process varies based on the operating system in use.
Installation instructions for Terraformer are given here. The preferred method of installation is using a package manager for your OS, such as Homebrew for macOS/Linux, MacPorts for macOS, or Chocolatey for Windows.
brew install terraformer
sudo port install terraformer
choco install terraformer
It is also possible to download the precompiled release of Terraformer. Each “release” has several versions of Terraformer compiled with the required Terraform provider included (i.e. separate versions for AWS, Azure, GCP, etc). For further instructions for your platform, view the installation instructions given above.
The installation for Azure Export for Terraform (aztfexport) can be a bit more involved. macOS users can install it through Homebrew, while Windows users can use winget or precompiled binaries. Linux package managers (such as apt or dnf) require additional steps to add specific repositories. It is also possible to compile directly from source using go get.
For my installation, I will be using Homebrew: brew install aztfexport.
The full installation instructions for Azure Export for Terraform are available here.
When using a version of Terraformer without the built-in providers, you need to provide a provider.tf file (in its own folder, which will be the same folder from which the Terraformer commands are run) that includes the appropriate provider argument block to define the end "cloud" or "target" from which to extract the configuration. This provider will then need to be initialized using Terraform's init action. Since we will be pulling our Terraform configuration from GCP, the provider.tf file will look like this:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.31.0"
    }
  }
}
In order to initialize the provider, we'll need to install Terraform. Full instructions based on OS are given here; for macOS users, brew install terraform should suffice.
Once you are in the project folder containing the provider.tf file, you can initialize it as follows:
[I] terraformer-stuffs » cd gcp
[I] gcp » ll
Permissions Size User Group Date Modified Name
.rw-r--r--@ 8.2k qsnyder staff 25 Mar 15:49 .DS_Store
.rw-r--r--@ 125 qsnyder staff 18 Mar 12:57 provider.tf
[I] gcp » cat provider.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.31.0"
    }
  }
}
[I] gcp » terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/google versions matching "~> 4.31.0"...
- Installing hashicorp/google v4.31.0...
- Installed hashicorp/google v4.31.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Once this is complete, you are ready to import configuration using Terraformer. First, however, we need to create the cloud environments from which to build our Terraform configuration.
The Google Cloud SDK is required to authenticate and interact with Google Cloud Platform. It will be used to create a cached login credential that Terraformer can then use to gather the running resources from your GCP instance. The SDK can be installed using Homebrew on macOS (brew install google-cloud-sdk) or by downloading the installer from the Google Cloud SDK page.
The Azure CLI is required to authenticate and interact with Azure from the command line. Like the google-cloud-sdk, it is mostly used to generate a cached login for your Azure account that Azure Export for Terraform can use. The CLI can be installed using Homebrew on macOS (brew install azure-cli) or by downloading the installer from the Azure CLI page.
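With all four CLIs covered, a quick sanity check can confirm they are on your PATH before moving on. This is a minimal sketch; the tool names assume the package-manager installs described above:

```shell
# Check each CLI used in this tutorial; prints "found" or "NOT installed" per tool
for tool in terraform terraformer gcloud az; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT installed"
  fi
done
```

Any tool reported as missing should be installed before continuing, as each is used in a later step.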
While tooling to extract the configuration is important, equally important is having some configuration to pull from in order to test the tooling. While it is entirely possible to create a bespoke environment on your own, this tutorial will focus on exporting configurations that have already been generated using “click-ops” within each cloud.
Because we’re (mostly) networking professionals, the two environments that exist are using previously created tutorials for the Enterprise Cloud Connectivity certification. Both of these environments enable VPN connectivity between an on-prem router and a cloud environment; one in Azure and one in Google Cloud Platform (GCP).
These tutorials can be found by opening a browser to Cisco U. and searching the tutorials for ENCC (https://u.cisco.com/search/tutorial?query=ENCC), or by directly accessing the following links:
It is not necessary to complete the entire tutorial for each cloud, as we cannot use Terraform to configure the on-prem router to establish the VPN to the cloud. However, the tutorials (or any other environment) will need to have the basics set up: the VPC/VNet, the cloud VPN gateway, the connection, and any other compute or network resources that you wish to create.
Note: It is possible to use Terraformer within an Azure deployment. However, for the sake of completeness, this tutorial will focus on the use of both tools, each in unique clouds to showcase the prerequisite and operational differences.
Being able to manage existing infrastructure with Terraform isn't just about crafting configuration in HCL (using how-to guides, Terraform provider documentation, etc.). If configuration already exists within a target environment and a Terraform configuration is crafted to effect change on that preexisting environment, it will fail. Terraform also considers state, what it specifically knows about that environment, when it attempts to apply configuration. If something exists that Terraform does not know about, the apply will fail.
We can observe this in the *.tfstate files that are created when a Terraform apply action is executed. These files include a JSON representation of the resources that Terraform has created or is managing. If the state file is lost, or if it is not updated with changes made outside of Terraform, Terraform will not be able to manage the resources.
That creates a challenge when we want to manage existing infrastructure with Terraform: we need to import the existing resources so that Terraform can manage them. terraform import can be used, but there are challenges around using it; namely, only a single resource can be imported at a time, and you still need to know a priori what the resource schema is within Terraform. This is why tools like Terraformer and AZTFExport exist: they can extract the configuration and state of existing resources so that they can be managed with Terraform.
Note: This step should only be completed after the environment within Google Cloud Platform has been created.
Once you have completed building the cloud VPC, VPN gateway, and connection within GCP, you're ready to extract the configuration in Terraform and, more importantly, the state.
If using a version of Terraformer without the provider included, you'll need to follow the steps outlined previously, creating a Terraform file (such as provider.tf) that includes only the provider declaration block; no other configuration is required. For completeness, this provider.tf file should look like this:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.31.0"
    }
  }
}
You'll also need to execute the terraform init command to download the provider into the project folder.
Once you have the provider in place, you will also need to ensure that you are logged in to your Google Cloud Account and that API access is enabled. You can do this by running the following command:
gcloud auth application-default login
This will open a browser window and prompt you to log in to your Google Cloud account. Once you have logged in, you will need to navigate to the API console within GCP and enable API access for the project and services that you are working with.
Once logged in and in the project folder with the Terraform provider initialized, you can now run the Terraformer command to extract the configuration and state of the resources within the GCP project. Terraformer's syntax can be gleaned through the use of the --help flag. Additionally, there is a plan action as well as an import action, both of which will be demonstrated.
Starting with the plan action, the syntax used will be similar to the following:
terraformer plan google --resources=[list-of-resources] --projects=[project-id-from-gcp] --regions=[region-of-project] -C
The resources list can be found in the Terraformer help context by using the list command. However, you must attach a project ID in the command string, as shown below:
> terraformer import google list --projects [project-id-from-gcp]
addresses
autoscalers
backendBuckets
backendServices
bigQuery
cloudFunctions
cloudbuild
cloudsql
cloudtasks
dataProc
disks
dns
externalVpnGateways
firewall
forwardingRules
gcs
gke
globalAddresses
globalForwardingRules
healthChecks
httpHealthChecks
httpsHealthChecks
iam
images
instanceGroupManagers
instanceGroups
instanceTemplates
instances
interconnectAttachments
kms
logging
memoryStore
monitoring
networkEndpointGroups
networks
nodeGroups
nodeTemplates
packetMirrorings
project
pubsub
regionAutoscalers
regionBackendServices
regionDisks
regionHealthChecks
regionInstanceGroupManagers
regionInstanceGroups
regionSslCertificates
regionTargetHttpProxies
regionTargetHttpsProxies
regionUrlMaps
reservations
resourcePolicies
routers
routes
schedulerJobs
securityPolicies
sslCertificates
sslPolicies
subnetworks
targetHttpProxies
targetHttpsProxies
targetInstances
targetPools
targetSslProxies
targetTcpProxies
targetVpnGateways
urlMaps
vpnTunnels
Using this list, we can create a command to plan the import of the resources used within the VPN created in a previous step.
terraformer plan google -r networks,vpnTunnels,externalVpnGateways,routers,routes,targetVpnGateways,firewall,securityPolicies -z us-west2 --projects gcp-landcgcpsandbox-nprd-46968 -C
You'll need to replace gcp-landcgcpsandbox-nprd-46968 with the project ID from your GCP project. This command will output a Terraform plan that can be used to import the resources into Terraform.
Note: The project ID value referenced is not the human-readable project name, but the unique identifier for the project within GCP. This can be found in the GCP console main landing page and copied using the button directly to the right of the value.
After running the plan, you'll see that Terraformer iterates through the resources defined in the command, collects them, and builds current states of those resources. The result is output as a plan.json file inside a directory structure that includes the project ID and region. The file is quite large, so its output is skipped for brevity in this tutorial.
Note: Had the -C flag not been used, each resource would have had output in a separate folder. Using the -C flag creates a "compact" output, defined within a single file for both the plan and import actions.
The plan.json file does not actually include any of the generated HCL, just a current snapshot of state. In order to generate the HCL, we must use the import action. We can optionally use the -o flag to specify the output directory for the generated HCL files. The final addition is to modify the path output pattern to place all imported resources into a single HCL file, rather than a per-resource directory structure.
terraformer import google -r networks,vpnTunnels,externalVpnGateways,routers,routes,targetVpnGateways,firewall,securityPolicies -z us-west2 --projects gcp-landcgcpsandbox-nprd-46968 -C -o gcp-hcl/ --path-pattern {output}/{provider}/
You'll see output similar to the following:
2024/04/04 12:55:31 google importing project gcp-landcgcpsandbox-nprd-46968 region us-west2
2024/04/04 12:55:34 google importing... networks
2024/04/04 12:55:34 google done importing networks
2024/04/04 12:55:35 google importing... vpnTunnels
2024/04/04 12:55:36 google done importing vpnTunnels
2024/04/04 12:55:36 google importing... externalVpnGateways
2024/04/04 12:55:37 google done importing externalVpnGateways
2024/04/04 12:55:37 google importing... routers
2024/04/04 12:55:38 google done importing routers
2024/04/04 12:55:38 google importing... routes
2024/04/04 12:55:39 google done importing routes
2024/04/04 12:55:39 google importing... targetVpnGateways
2024/04/04 12:55:40 google done importing targetVpnGateways
2024/04/04 12:55:40 google importing... firewall
2024/04/04 12:55:41 google done importing firewall
2024/04/04 12:55:41 google importing... securityPolicies
2024/04/04 12:55:42 google done importing securityPolicies
....
2024/04/04 12:55:43 Refreshing state... google_compute_route.tfer--default-route-936858ed430c8a92
2024/04/04 12:55:43 Refreshing state... google_compute_route.tfer--default-route-0f6bf6150e736229
2024/04/04 12:55:43 Refreshing state... google_compute_route.tfer--default-route-31d6706bd672bc66
2024/04/04 12:55:43 Filtered number of resources for service externalVpnGateways: 1
2024/04/04 12:55:43 Filtered number of resources for service routers: 1
2024/04/04 12:55:43 Filtered number of resources for service routes: 43
2024/04/04 12:55:43 Filtered number of resources for service targetVpnGateways: 0
2024/04/04 12:55:43 Filtered number of resources for service firewall: 5
2024/04/04 12:55:43 Filtered number of resources for service securityPolicies: 0
2024/04/04 12:55:43 Filtered number of resources for service networks: 1
2024/04/04 12:55:43 Filtered number of resources for service vpnTunnels: 1
2024/04/04 12:55:43 google Connecting....
2024/04/04 12:55:43 google save
2024/04/04 12:55:43 google save tfstate
When completed, Terraformer will have created a folder under gcp-hcl/ called google (the provider name, aligning with the pattern we changed in the Terraformer command), which contains the generated HCL files.
> ls -l gcp-hcl/google/
.rwxr-xr-x 8.4k qsnyder 4 Apr 12:55 outputs.tf
.rwxr-xr-x 152 qsnyder 4 Apr 12:55 provider.tf
.rwxr-xr-x 22k qsnyder 4 Apr 12:55 resources.tf
.rwxr-xr-x 112k qsnyder 4 Apr 12:55 terraform.tfstate
.rwxr-xr-x 111 qsnyder 4 Apr 12:55 variables.tf
The resources.tf file contains the HCL for the resources that were imported, which is the main focus of this tutorial. However, outputs and state were also generated as part of the import action. This allows us to understand what the click-based operations within the GCP console look like in HCL and how we can manage them with Terraform.
Note: Depending on resource coverage within Terraformer, you may need to manually adjust the HCL to ensure that all resources are managed. This is a common issue with Terraformer, as it does not always cover every resource within a given provider. Additionally, some values may not populate correctly where Terraformer lacks coverage; in our output, these are indicated with URLs as placeholders.
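One quick way to spot attributes that were left as raw API URLs is to grep the generated HCL. The sample below fabricates a small resources file (hypothetical content, modeled on the placeholder style described above) and flags the offending line:

```shell
# Hypothetical fragment of Terraformer output; the network attribute is a raw API URL
cat > sample-resources.tf <<'EOF'
resource "google_compute_route" "tfer--example-route" {
  name    = "example-route"
  network = "https://www.googleapis.com/compute/v1/projects/my-project/global/networks/my-vpc"
}
EOF

# Flag lines still pointing at the raw API rather than a Terraform resource reference
grep -n 'googleapis.com' sample-resources.tf
```

Each hit is a candidate for manual cleanup, for example by replacing the URL with a reference to the corresponding imported resource.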
In order to manage the environment with Terraform, we must move into the gcp-hcl/google folder and execute this command:
terraform state replace-provider \
-auto-approve \
"registry.terraform.io/-/google" \
"hashicorp/google"
This updates the state schema to align with newer versions of Terraform. Terraformer generates state using Terraform 0.12 conventions, which are not compatible with newer versions, so this command must be executed. At this point, terraform plan -refresh-only could be executed; the saved state would be compared with what exists, and the infrastructure could then be managed by modifying the generated HCL.
Note: If using the VPN tutorial above, the refresh will fail, as there are incomplete exported values from Terraformer, including next-hop values, shared secrets (which, for security, are not exported), etc. These values will need to be manually added to the HCL in order to manage the current state of our VPN environment within GCP. See the image below.
When dealing specifically with Azure infrastructure, we can use another tool for export that is written and developed by Microsoft, called Azure Export for Terraform. This isn’t to say that you can’t use Terraformer for Azure, but the Azure Export for Terraform tool is specifically designed to work with Azure resources and is a bit more user-friendly to operate.
Similar to Terraformer, we need to set up the environment for AZTFExport by installing the Azure CLI (done in a previous step) and logging in to your Azure account with the following command:
az login --scope https://graph.microsoft.com//.default
after which a web browser will open and ask you to log in to your Microsoft account. The cached login credentials are valid for roughly two weeks, after which running AZTFExport will prompt you to log in again.
The only other requirement for AZTFExport is that the current working directory where the command will be run must be empty; otherwise the application will either refuse to run or prompt to ask if you wish to overwrite existing files, depending on which mode of execution is used.
The easiest way to run AZTFExport is interactively, by executing the aztfexport command to capture the configuration of an entire Resource Group (RG) within your Azure environment.
aztfexport resource-group --parallelism 2 [ResourceGroupName]
Note: You will need to replace [ResourceGroupName] with the name of the Resource Group within your Azure environment. Additionally, the --parallelism flag is optional and can be set to any number of parallel threads that you wish to use to export the configuration. The default is 10, which can cause timeout issues when attempting to export; setting this value to 2, while slower, will ensure that the export completes successfully.
Once this command is invoked, AZTFExport will begin to connect to Azure, understand the configured resources, and then present a menu of all resources that could be imported.
Once the menu appears, you can use the onscreen controls to remove resources, filter resources, or export them; the available commands appear at the bottom of the menu window. Once the desired options are selected, the configuration can be imported by pressing the w key. This generates the Terraform HCL files and tfstate of the running environment.
This will take some time, based on the lower number of parallel threads; the GIF above is sped up to 5x normal speed.
Once exported, you will be prompted to return to the working directory where the command was run. The directory listing should look similar to the following:
aztfexport-demo » tree -a
.
├── .terraform
│ ├── providers
│ │ └── registry.terraform.io
│ │ └── hashicorp
│ │ └── azurerm
│ │ └── 3.77.0
│ │ └── darwin_amd64
│ │ └── terraform-provider-azurerm_v3.77.0_x5
│ └── terraform.tfstate
├── .terraform.lock.hcl
├── aztfexportResourceMapping.json
├── import.tf
├── main.tf
├── provider.tf
├── terraform.tf
└── terraform.tfstate
8 directories, 9 files
From here, we can execute terraform plan -refresh-only and see that the saved state matches the current running state of the Azure resources.
[I] aztfexport-demo » terraform plan -refresh-only
azurerm_local_network_gateway.res-2: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/localNetworkGateways/encc_ipsec_vpn_peer]
azurerm_network_watcher.res-3: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westus2]
azurerm_public_ip.res-4: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/publicIPAddresses/encc_ipsec_vpn_ip]
azurerm_virtual_network.res-6: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/virtualNetworks/encc-ciscou-demo]
azurerm_resource_group.res-0: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG]
azurerm_subnet.res-7: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/virtualNetworks/encc-ciscou-demo/subnets/GatewaySubnet]
azurerm_subnet.res-8: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/virtualNetworks/encc-ciscou-demo/subnets/default]
azurerm_virtual_network_gateway.res-5: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/virtualNetworkGateways/encc_ipsec_vpn_gateway]
azurerm_virtual_network_gateway_connection.res-1: Refreshing state... [id=/subscriptions/4813bc9d-1d56-4737-908c-a7824129a4ca/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/connections/encc_ipsec_vpn_connection]
No changes. Your infrastructure still matches the configuration.
Terraform has checked that the real remote objects still match the result of your most recent changes, and
found no differences.
───────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
This indicates that our environment directly matches the state of what exists within our Terraform directory.
If you wish to run AZTFExport in non-interactive mode, you can do so by specifying the --non-interactive flag. This will export the configuration of the specified Resource Group without using the interactive menu.
aztfexport resource-group --non-interactive --parallelism 2 [ResourceGroupName]
This will output the same files as the interactive mode and the same tests can be done on the resulting Terraform configuration as before.
Another feature of AZTFExport is the ability to export only the HCL files and not the tfstate file, by specifying the --hcl-only flag. This may be useful for generating HCL for a configuration that will be reused in a different environment, rather than for managing an existing configuration.
aztfexport resource-group --hcl-only --parallelism 2 [ResourceGroupName]
This command can be run in either interactive or non-interactive mode.
Each of these tools has its pros and cons, and neither provides a 100% complete export of existing configuration within the cloud provider. However, as tools to import large chunks of state as well as provide skeleton HCL configurations for Terraform, they are invaluable.
As was noted earlier, Terraformer can be used across a variety of infrastructure providers, while AZTFExport is specific to Azure. This makes Terraformer a more versatile tool, but also means that it may not be as fine-tuned to the specific nuances of a given cloud provider. AZTFExport, on the other hand, is specifically designed to work with Azure and is more likely to be up-to-date with the latest features and changes within the Azure environment.
It is highly suggested that you explore various cloud configurations and providers with Terraformer, as well as validate configurations and services with AZTFExport, to understand the nuances of each tool and what does and does not export fully.
Documentation for Terraformer can be found on the Terraformer project GitHub page here.
Documentation for Azure Export for Terraform can be found on the AZTFExport project GitHub page here.
Microsoft also has documentation and getting-started pages for Azure Export for Terraform:
Congratulations!
You have successfully completed this tutorial. Through this exercise, you have gained additional understanding of Terraform state and how third-party tools can be leveraged to generate Terraform HCL and state from existing configurations.