
VMware has announced a Terraform provider for Tanzu Mission Control (TMC) that helps manage TMC operations in an Infrastructure-as-Code (IaC) manner. In this blog post, I will cover some basic operations, such as creating a cluster group, a cluster, a workspace, and a namespace, using Terraform.
Prerequisites
- Terraform v0.15 or later is installed
- A VMware Cloud API token with the appropriate TMC permissions
- The TMC endpoint URL
Using the Terraform Provider to Run TMC Tasks
- Create a directory to hold your Terraform template, e.g. tmc-terraform
- Create a Terraform template, e.g. main.tf
Now it's time to initialize the Terraform module. To do that, put the content below in the main.tf file and update the TMC endpoint URL and token.
terraform {
  required_providers {
    tanzu-mission-control = {
      source  = "vmware/tanzu-mission-control"
      version = "1.0.1" # provider version; update it as new versions are released
    }
  }
}

variable "endpoint" {
  type        = string
  description = "TMC endpoint"
  default     = "<put your TMC endpoint url>"
}

variable "vmw_cloud_api_token" {
  type        = string
  description = "TMC API Token"
  default     = "<put your token here>"
}

provider "tanzu-mission-control" {
  endpoint            = var.endpoint            # optionally use the TMC_ENDPOINT env var
  vmw_cloud_api_token = var.vmw_cloud_api_token # optionally use the VMW_CLOUD_API_TOKEN env var

  # If you are using a dev or a different CSP endpoint, change the default value below.
  # For production environments the CSP endpoint is console.cloud.vmware.com.
  # vmw_cloud_api_endpoint = "console.cloud.vmware.com" # optionally use the VMW_CLOUD_ENDPOINT env var
}
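As the comments in the provider block note, the endpoint and API token can also be supplied through environment variables instead of being hardcoded in main.tf. A minimal sketch, using placeholder values that you would replace with your own:

```shell
# Placeholder values -- substitute your real TMC endpoint and API token.
export TMC_ENDPOINT="myorg.tmc.cloud.vmware.com"
export VMW_CLOUD_API_TOKEN="my-vmw-cloud-api-token"
```

With these exported, the endpoint and vmw_cloud_api_token arguments can be left out of the provider block, which keeps credentials out of version control.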
- Run terraform init after saving the above content in the file:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of vmware/tanzu-mission-control from the dependency lock file
- Using previously-installed vmware/tanzu-mission-control v1.0.1
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Create cluster group
- Add the code block below to your Terraform template and update the values (e.g. name, labels) if required.
# Create cluster group with minimal information
resource "tanzu-mission-control_cluster_group" "create_cluster_group_min_info" {
  name = "dt-cluster-group-using-tf-for-vsphere"

  meta {
    description = "cluster group created through terraform"
    labels = {
      "createdby" : "dt",
      "usedfor" : "tkg-on-vsphere"
    }
  }
}
- Execute the following commands:
$ terraform plan
$ terraform apply
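After the apply succeeds, the cluster group can optionally be read back with the provider's matching data source. The sketch below assumes the data source mirrors the resource name and accepts a name argument; check the provider documentation for the exact schema:

```hcl
# Read back the cluster group created above (usage sketch).
data "tanzu-mission-control_cluster_group" "read_cluster_group" {
  name = "dt-cluster-group-using-tf-for-vsphere"
}
```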
Create Tanzu Kubernetes Grid Service workload cluster
- Add the code block below to your main Terraform template and update the values as per your environment.
resource "tanzu-mission-control_cluster" "create_tkgs_workload" {
  management_cluster_name = "dt-supervisor-cluster01"
  provisioner_name        = "dttkgs"
  name                    = "tkgs-workload-1"

  meta {
    labels = { "clustertype" : "tkc" }
  }

  spec {
    cluster_group = "dt-vsphere-cg"

    tkg_service_vsphere {
      settings {
        network {
          pods {
            cidr_blocks = [
              "172.20.0.0/16", # pods cidr block defaults to `172.20.0.0/16`
            ]
          }
          services {
            cidr_blocks = [
              "10.96.0.0/16", # services cidr block defaults to `10.96.0.0/16`
            ]
          }
        }
      }

      distribution {
        version = "v1.21.2+vmware.1-tkg.1.ee25d55"
      }

      topology {
        control_plane {
          class         = "best-effort-xsmall"
          storage_class = "tanzu"
          # storage class is either `wcpglobal-storage-profile` or `gc-storage-profile`
          high_availability = false
        }

        node_pools {
          spec {
            worker_node_count = "1"
            cloud_label = {
              "nodepool" : "worker"
            }
            node_label = {
              "key2" : "val2"
            }
            tkg_service_vsphere {
              class         = "best-effort-xsmall"
              storage_class = "tanzu"
              # storage class is either `wcpglobal-storage-profile` or `gc-storage-profile`
            }
          }

          info {
            name        = "default-nodepool" # default node pool name is `default-nodepool`
            description = "tkgs workload nodepool"
          }
        }
      }
    }
  }

  ready_wait_timeout = "10m" # default is 3m; here we wait up to 10 minutes for the cluster to become ready
}
- Execute the following commands:
$ terraform plan
$ terraform apply
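To make the cluster details easy to inspect after the apply, Terraform output values can reference the resource above. The attribute paths here are assumptions based on the resource arguments shown earlier; verify them against the provider schema:

```hcl
# Expose a few cluster attributes after `terraform apply` (sketch).
output "tkgs_workload_cluster_name" {
  value = tanzu-mission-control_cluster.create_tkgs_workload.name
}

output "tkgs_workload_cluster_group" {
  value = tanzu-mission-control_cluster.create_tkgs_workload.spec[0].cluster_group
}
```

After a successful apply, terraform output prints these values without having to open the TMC console.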
Create Tanzu Mission Control workspace
- Add the following code block to your main Terraform template and update the labels, name, etc.
resource "tanzu-mission-control_workspace" "create_workspace" {
  name = "dt-tf-workspace"

  meta {
    description = "dt demo workspace"
    labels = {
      "workspacetype" : "demo",
      "workspacename" : "dt-tf-workspace"
    }
  }
}
- Execute the following commands:
$ terraform plan
$ terraform apply
Create Tanzu Mission Control namespace with attach set to ‘true’
- Add the following code block to your main Terraform template and update the labels, name, provisioner, etc.
resource "tanzu-mission-control_namespace" "create_namespace_attached" {
  name                    = "dt-tf-namespace"         # Required
  cluster_name            = "tkgs-workload-1"         # Required
  provisioner_name        = "dttkgs"                  # Default: attached
  management_cluster_name = "dt-supervisor-cluster01" # Default: attached

  meta {
    description = "Create namespace through terraform"
    labels      = { "key" : "value" }
  }

  spec {
    workspace_name = "dt-tf-workspace" # Default: default
    attach         = true
  }
}
- Execute the following commands:
$ terraform plan
$ terraform apply
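Since all of these resources live in the same template, the hardcoded names can be replaced with references to the other resources, so Terraform infers the creation order automatically. A sketch of the namespace block rewritten this way, assuming the resource names used earlier in this post:

```hcl
resource "tanzu-mission-control_namespace" "create_namespace_attached" {
  name                    = "dt-tf-namespace"
  cluster_name            = tanzu-mission-control_cluster.create_tkgs_workload.name
  provisioner_name        = "dttkgs"
  management_cluster_name = "dt-supervisor-cluster01"

  spec {
    # Referencing the workspace resource creates an implicit dependency,
    # so Terraform creates the workspace before the namespace.
    workspace_name = tanzu-mission-control_workspace.create_workspace.name
    attach         = true
  }
}
```

With these references in place, a single terraform apply creates the cluster, workspace, and namespace in the right order, and terraform destroy tears them down in reverse.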
More information about the Terraform provider for TMC can be found at the link below:
https://registry.terraform.io/providers/vmware/tanzu-mission-control/latest/docs