If you are using a TKGm deployment with multiple management clusters and want to manage them all from a single node, this article is for you. In many cases, different TKGm management and workload clusters are deployed from different nodes. But when it comes to managing all of them, it makes sense to do so from one node, which eases the life of administrators.
So, I will talk about how to add a TKG management cluster that was deployed from a different node to the node from which you want to manage it.
Prerequisites for adding a management cluster
- Tanzu CLI is installed
- Tanzu CLI plugins are installed
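Before going further, it is worth verifying both prerequisites on the target node. The snippet below is a minimal sanity check using standard Tanzu CLI commands; the exact version and plugin output will of course depend on your installation.

```shell
# Verify the Tanzu CLI and its plugins are available on this node.
if command -v tanzu >/dev/null 2>&1; then
  tanzu version        # confirms the CLI itself is installed
  tanzu plugin list    # confirms the plugins are installed
  echo "tanzu CLI present"
else
  echo "tanzu CLI not found in PATH"
fi
```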
Adding a management cluster on a different node
1. Copy the TKG management cluster config file from the node where the cluster was deployed. The config file is located at ~/.kube-tkg/config
$ scp ~/.kube-tkg/config firstname.lastname@example.org:/tmp/
config                                  100%   16KB  15.2MB/s   00:00
2. Log in to the node where you want to add the management cluster
3. Run the “tanzu login” command
$ tanzu login
? Select a server + new server
? Select login type Local kubeconfig
? Enter path to kubeconfig (if any) /tmp/config
? Enter kube context to use demo-tkg-mgmt-admin@demo-tkg-mgmt
? Give the server a name demo-tkg-mgmt
✔  successfully logged in to management cluster using the kubeconfig demo-tkg-mgmt
- There are two ways to add a new management cluster: using the API endpoint or using a kubeconfig file. I used the kubeconfig file method here, but either works.
- The cluster context name and server name can be found in the config file, or you can run the “kubectl config get-contexts” command on a working node and it will show you the details.
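If kubectl is not handy, the context name can also be read straight out of the copied kubeconfig file. The sketch below builds a small sample file mimicking the relevant kubeconfig fields (the names match this walkthrough); in practice you would run the awk line against /tmp/config instead.

```shell
# Sample kubeconfig fragment for illustration only; use /tmp/config in practice.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: demo-tkg-mgmt-admin@demo-tkg-mgmt
contexts:
- name: demo-tkg-mgmt-admin@demo-tkg-mgmt
  context:
    cluster: demo-tkg-mgmt
    user: demo-tkg-mgmt-admin
EOF

# The current-context value is exactly what "tanzu login" asks for.
awk '/^current-context:/ {print $2}' /tmp/sample-kubeconfig
# → demo-tkg-mgmt-admin@demo-tkg-mgmt
```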
Once the TKG management cluster has been added successfully, you can run tanzu commands to validate it.
$ tanzu management-cluster get
  NAME           NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
  demo-tkg-mgmt  tkg-system  running  3/3           1/1      v1.20.5+vmware.1  management

$ tanzu cluster list
  NAME        NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES           PLAN
  demo-tkg-1  default    running  3/3           3/3      v1.20.5+vmware.1  tanzu-services  prod
If you want to use kubectl commands as well, set the kubeconfig file by running the command below.
$ export KUBECONFIG=/tmp/config
$ kubectl get po -n default
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-cq9kw   1/1     Running   0          10s
That’s all. You are now set to manage the clusters from the new node. If you want to manage workload clusters as well, simply copy their config files and use them from the new node.
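One convenient way to work with the management cluster and workload clusters side by side is to list several kubeconfig files in the KUBECONFIG variable (colon-separated), which kubectl merges into one view. The file name for the workload cluster below is an assumption based on this walkthrough; the `tanzu cluster kubeconfig get` comment shows one way to produce such a file.

```shell
# A workload cluster kubeconfig can typically be exported with something like:
#   tanzu cluster kubeconfig get demo-tkg-1 --admin
# (writes the context into your kubeconfig; check your TKG version's docs)

# Point kubectl at both files at once; /tmp/demo-tkg-1-kubeconfig is a
# hypothetical path for the exported workload cluster config.
export KUBECONFIG=/tmp/config:/tmp/demo-tkg-1-kubeconfig
echo "$KUBECONFIG"

# With both loaded, switch between clusters by context:
#   kubectl config get-contexts
#   kubectl config use-context demo-tkg-1-admin@demo-tkg-1
```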