Installing Cloud Native Runtimes for Tanzu on a Tanzu Kubernetes Cluster and deploying an application to test Serverless


What is Cloud Native Runtimes for Tanzu?

Cloud Native Runtimes for Tanzu is a serverless application runtime for Kubernetes, based on Knative. Cloud Native Runtimes supports the following:

  • Scaling applications down to zero pods
  • Scaling applications up from zero pods
  • Event-triggered workloads
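To make this concrete: in Knative, scale-to-zero behavior is declared per service through autoscaling annotations on the revision template. The sketch below shows the shape of such a workload (the service name is hypothetical and the image is the public Knative sample); we will deploy a real example in section 2.

  # Sketch of a scale-to-zero Knative Service (do not apply yet; Knative is installed later in this post)
  apiVersion: serving.knative.dev/v1
  kind: Service
  metadata:
    name: hello-scaler                                # hypothetical name
  spec:
    template:
      metadata:
        annotations:
          autoscaling.knative.dev/minScale: "0"       # allow scaling down to zero pods
          autoscaling.knative.dev/maxScale: "5"       # cap scale-out at five pods
      spec:
        containers:
          - image: gcr.io/knative-samples/helloworld-go   # public sample image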

1. Installing Cloud Native Runtimes for Tanzu on a Tanzu Kubernetes Cluster

In this post I will walk through installing Cloud Native Runtimes for Tanzu on a Tanzu Kubernetes Grid cluster (TKC). Generally, when a TKC is set up, we also install Contour for ingress management, so I am assuming that Contour is already in place.

Cloud Native Runtimes can also be installed on a TKC where Contour is not running; I will cover that in a later post.

1.1 Installation Prerequisites

  • Contour v1.14 or later: TKG 1.3.1 ships with Contour 1.12, so I recommend using TKG 1.4.
  • Docker Hub account: I will be using Docker Hub as the container registry, but you can use any registry.
  • kapp-controller v0.17.0 or later running on the TKC. Since we are using TKG 1.4, it comes with version 0.23.
  • The following command-line tools must be installed on the system from which you will run the install command:
    • kubectl (v1.18 or later)
      $ kubectl version
      Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2+vmware.1", GitCommit:"54e7e68e30dd3f9f7bb4f814c9d112f54f0fb273", GitTreeState:"clean", BuildDate:"2021-06-28T22:17:36Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
      Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2+vmware.1", GitCommit:"54e7e68e30dd3f9f7bb4f814c9d112f54f0fb273", GitTreeState:"clean", BuildDate:"2021-06-28T22:12:04Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
      
    • kapp (v0.34.0 or later)
      $ kapp version
      kapp version 0.37.0
      
      Succeeded
      
    • ytt (v0.30.0 or later)
       $ ytt version
       ytt version 0.34.0
      
    • kbld (v0.28.0 or later)
       $ kbld version
       kbld version 0.30.0
      
       Succeeded
      
    • kn
      $ kn version
      Version:      v20210925-62b1f739
      Build Date:   2021-09-25 10:41:49
      Git Revision: 62b1f739
      Supported APIs:
      * Serving
      - serving.knative.dev/v1 (knative-serving v0.26.0)
      * Eventing
      - sources.knative.dev/v1 (knative-eventing v0.26.0)
      - eventing.knative.dev/v1 (knative-eventing v0.26.0)
      

Note: If you need help installing the above tools, refer to my earlier blog post, which contains the steps to install them: https://mappslearning.wordpress.com/2021/09/09/installing-tanzu-application-platform-tap-beta-on-an-aks-cluster/
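Before moving on, a quick sanity check like the sketch below confirms every tool is on the PATH. Note that imgpkg is also needed later in this post for relocating images, so I have included it here:

  # Verify the required CLIs are installed (imgpkg is used later for image relocation)
  $ for tool in kubectl kapp ytt kbld kn imgpkg; do
      command -v "$tool" >/dev/null 2>&1 && echo "$tool: OK" || echo "$tool: MISSING"
    done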

  • Download the Cloud Native Runtimes 1.0.2+build.81 archive. This is the latest version at the time of writing; you may see a different version once a new one is released.
  • Extract the contents
 # Extract the downloaded tar file
 $ tar -xvf cloud-native-runtimes-1.0.2.tgz
  cloud-native-runtimes/
  cloud-native-runtimes/.imgpkg/
  cloud-native-runtimes/.imgpkg/images.yml
  cloud-native-runtimes/README.md
  cloud-native-runtimes/VERSION.txt
  cloud-native-runtimes/bin/
  cloud-native-runtimes/bin/install.sh
  cloud-native-runtimes/config/
  cloud-native-runtimes/config/app.yaml
  cloud-native-runtimes/config/cnr-config.yaml
  cloud-native-runtimes/config/kapp.yaml
  cloud-native-runtimes/config/overlays/
  cloud-native-runtimes/config/overlays/100-values.star
  cloud-native-runtimes/config/overlays/100-values.yaml
  cloud-native-runtimes/config/overlays/kapp-controller-bundle-secret.yaml
  cloud-native-runtimes/config/overlays/serverless-values-secret-overlay.yaml
  cloud-native-runtimes/observability/
  cloud-native-runtimes/observability/wavefront/
  cloud-native-runtimes/observability/wavefront/app-operator-revision-view.json
  cloud-native-runtimes/observability/wavefront/app-operator-service-view.json

  • Download the lock file as well; it is available in the same location as the archive.
$ ls -l | grep -i lock
  -rw-r--r-- 1 root root  216 Sep 25 12:24 cloud-native-runtimes-1.0.2.lock

Steps to validate the Contour version installed on the TKC

  • Switch to an appropriate cluster context

    # Set kubectl alias
    $ alias k=kubectl
    
    # To list the available cluster contexts
    $ k config get-contexts
    
    # Switch to the desired cluster context
    $ k config use-context <CONTEXT-NAME>
    
  • Run the below commands

    # Identify the Contour namespace
    $ k get ns | grep -i tanzu-system-ingress
    tanzu-system-ingress             Active   8d
    
    # See the Contour pods in the above namespace
    $ k get po -n tanzu-system-ingress | grep -i contour
    NAME                       READY   STATUS    RESTARTS   AGE
    contour-754d6f69fd-fzc2s   1/1     Running   0          8d
    contour-754d6f69fd-gcvwc   1/1     Running   0          8d
    
    # Export the namespace and deployment name variables
    $ export CONTOUR_NAMESPACE=tanzu-system-ingress
    $ export CONTOUR_DEPLOYMENT=$(kubectl get deployment --namespace $CONTOUR_NAMESPACE --output name)
    
    # Get the Contour image in use
    $ k get $CONTOUR_DEPLOYMENT --namespace $CONTOUR_NAMESPACE --output jsonpath="{.spec.template.spec.containers[].image}"
    projects.registry.vmware.com/tkg/contour@sha256:152064c0a17ca80e4524fad80f28d7ec9ed707a63ff1b017d63dc84530b6aef6
    
    # Pull (if not already local) and inspect the container image to see the version; it is 1.17, which satisfies the v1.14 requirement, so we are good to go
    $ docker inspect projects.registry.vmware.com/tkg/contour@sha256:152064c0a17ca80e4524fad80f28d7ec9ed707a63ff1b017d63dc84530b6aef6 | grep -i image.version
    "LABEL org.opencontainers.image.version=v1.17.1+vmware.1"
    "org.opencontainers.image.version": "v1.17.1+vmware.1"
    "org.opencontainers.image.version": "v1.17.1+vmware.1"
    
    

    Note: You can also look at the component versions that ship with TKG 1.4 in the release notes: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.4/rn/VMware-Tanzu-Kubernetes-Grid-14-Release-Notes.html

1.2 Install Cloud Native Runtimes on a TKC with an Existing Contour Instance

    • Log in to Docker Hub
     $ docker login
      Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
      Username: dineshtripathi30
      Password:
      WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
      Configure a credential helper to remove this warning. See
      https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
      Login Succeeded
    
    • Relocate the images to your registry
      # dineshtripathi30 is my Docker Hub repository
      # --lock-output: you can give the output lock file any name
      $ imgpkg copy --lock cloud-native-runtimes-1.0.2.lock --to-repo dineshtripathi30/cnr --lock-output ./relocated.lock
      copy | exporting 44 images...
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:007139a0c1becd8cb70f919a490f583ca27a6e1894266e80b109b7f0666ab46f
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:087fde5000e9753ba18416b61cf5e08252b1efcf25e5763c07d294e1cf780b5f
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:1c2b006f1b90858924fbe59fffddb9af7abf83ea28170d56b7eef54dc0c4e8cd
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:21d4949b1f03e5f505fa94bb23186eb72a3df2ef492cb6934e7681abff5259b9
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:39b83b79237a72df90aaf5a68a9ebed8a0627095b187a32ea16f9aea4e912407
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:5874763805fb8714c117179505d565644397ff589be8f56fd57b316a0b2e464a
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:5e648ca11a93d743007751ac5ae33c664dd69e12eddba18a753bef0039b18e5e
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:620e4ba295ff3159410056e14f751646dbb004224ea72ff60cd382b467a3ef38
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:6448a628fa7655ebfd600e071afe73d585bf40845bb822b17741fe8935a62096
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:67fb8ce04a96761cdb7c0af70d2d83304799b9b93c308d8fdd3399ce74e59a6f
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:736caa1504c01331d2ca55a51813ab3ba1dedb0103d8f4df79bd8bf3d2272517
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:748135423939fe869738213d9eba4908f91e08e8138c59b9506562c4431a07fe
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:7564175a982fa64267e1b3b55507c882189ce34dd9842f0fa157f411c7784571
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:75de41cbefd07213c888708a9384106f79a7c1eac9e2f70341542fc8d0a225ea
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:78b0766a7f0a04f717b2aa85ecadc6b2fc26a79d0d37fff3ff106e7eade10082
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:7b5307e6fa0c0b019284862db880097de31a99ea76462e854f3e9a31ac599e0d
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:7c7ed9d426a6372f8307c3df22d8945dfe9b4f4131b810f7130c04923883e875
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:7e830af7495d2bf041be379a4442f8b18f04746396fcd9cf7e24b77992d3dbd3
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:92c0f26b74395d1f5763605c710386ccd178da837e0425efeaf3680128aca7ef
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:932a48da04135a8f18bdaf97588c84d8a94a2bbd530a90061b9618e53afb3c6a
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:a0c17e32a0a9f3749a545bddb052ff529eb46801e973b83df7aa7c5b22f77532
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:a1917c6fec721edfb44290f0d316480782670e3531b9e47ef98e4376c57bb27c
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:a8889be5a497e75f4aa2c70623fc89b7c8a14d23d12cc67f878b64dd568c84ef
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:abd56434c05c63e2248ebad5bbf4026bb260577cbf9b8f5b77df32ffeb95f497
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:b1d6f0ba50960f8a5476e5e9b93ff9f427f567d233266488c784981050865015
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:b33bba453e03b465c5e1f7b57dc5ed7ad79291846d1bd066bae6a05f6107602a
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:b7305d94693c7ae9977fff1072dc61f91ce48417a85394273582505d20e4da7e
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:bcd7c08db93c2c4830d7ab9f9cbb51a8108988583c9ef132dd0b78da0439518a
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:c41d5b1ad197d97309266a5ae77b9e663cbfa3fc973028c2a096e60d193c9a7d
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:c9529d4e6abb11da6bf74ddd8591880c31391ffe816520af2afe7e0c59c89cca
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:d15bb954c71e391b4856d1f2e2161cf46c1dc122740ca4f6ea802143359161f5
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:d522bf953456f8dec03550dd165361b3bf2c182d327c388d36d1a6ee0186c10b
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:d951d50b2212ad0923c63547008a73a95d3f8ed533335e7718fa4ebfbb829a20
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:dc7bcfa9ad1980d83ce4b31f496d5272e842dfaf16d90b0dbff77a3131f8e053
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:df9d7a4927aece444866d6798dbd4545917291410f55e55d83cf81fd255e1ed0
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:dff17bd9c5af163f06425ed2abaf59ab929876496627cdd992c7605de2a95e05
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:e2035a467654d13243ace7cfac05a92617c9b382ea3a223341da15458bbc7b33
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:e2c6305792854196683b452732cb38e54b20a51819da76654873952a3a014889
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:e868cb08c35c7315d052729714cf513493df981d6479d1b3a939e25b439fd11a
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:e8bff8af15053478968207a8f3ea401a71222a1a296726000a0ae0cc2349cde1
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:e8ea413061259bdfb192cbfd5919d98d84875f60c5467db0f6f82b7719c9f853
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:ec50d1e42178494f9395b0ca66a2ec048707cdae81a5a87ad95ef47b35e50501
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:ef378293c79cf0cbb6c7f7c178c0abff15fcee573f8b5b3e7a891ba8d77927d8
      copy | will export projects.registry.vmware.com/tanzu_serverless/release@sha256:fb8d6142ed40b3e86f498f1d073d9f81b538316f5de4332f5feab9b6ec7bdb6c
      copy | exported 44 images
      copy | importing 44 images...
    
      990.85 MiB / 990.85 MiB [====================================================================================================] 100.00% 232.05 MiB/s 4s
    
      copy | done uploading images
      Succeeded
    
    
    • Validate the generated lock file
     $ cat relocated.lock
      ---
      apiVersion: imgpkg.carvel.dev/v1alpha1
      bundle:
        image: index.docker.io/dineshtripathi30/cnr@sha256:d951d50b2212ad0923c63547008a73a95d3f8ed533335e7718fa4ebfbb829a20
      kind: BundleLock
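If you want to double-check the relocation, you can optionally pull the bundle back from Docker Hub using the digest recorded in relocated.lock; the output directory below is arbitrary:

      # Pull the relocated bundle locally and list its contents
      $ imgpkg pull -b index.docker.io/dineshtripathi30/cnr@sha256:d951d50b2212ad0923c63547008a73a95d3f8ed533335e7718fa4ebfbb829a20 -o /tmp/cnr-bundle
      $ ls /tmp/cnr-bundle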
    
    
    • Run the installation script
    # Note: We point both ingress namespaces at tanzu-system-ingress because our Contour is running there and we want to reuse it; cnr_ingress__reuse_crds=true tells the installer to reuse the existing Contour CRDs
    $ cnr_ingress__reuse_crds=true cnr_ingress__external__namespace=tanzu-system-ingress cnr_ingress__internal__namespace=tanzu-system-ingress ./bin/install.sh
    
      ~/dinesh/cloud-native-runtimes ~/dinesh/cloud-native-runtimes
      namespace/cloud-native-runtimes created
      Target cluster 'https://10.0.4.28:6443' (nodes: tkg-ss-cluster-control-plane-2w82w, 8+)
      resolve | final: projects.registry.vmware.com/tanzu_serverless/release@sha256:d951d50b2212ad0923c63547008a73a95d3f8ed533335e7718fa4ebfbb829a20 -> projects.registry.vmware.com/tanzu_serverless/release@sha256:d951d50b2212ad0923c63547008a73a95d3f8ed533335e7718fa4ebfbb829a20
    
      12:33:56PM: info: Resources: Ignoring group version: schema.GroupVersionResource{Group:"stats.antrea.tanzu.vmware.com", Version:"v1alpha1", Resource:"antreanetworkpolicystats"}
      12:33:56PM: info: Resources: Ignoring group version: schema.GroupVersionResource{Group:"stats.antrea.tanzu.vmware.com", Version:"v1alpha1", Resource:"networkpolicystats"}
      12:33:56PM: info: Resources: Ignoring group version: schema.GroupVersionResource{Group:"stats.antrea.tanzu.vmware.com", Version:"v1alpha1", Resource:"antreaclusternetworkpolicystats"}
    
      Changes
    
      Namespace              Name                   Kind                Conds.  Age  Op      Op st.  Wait to    Rs  Ri
      (cluster)              cnr-role-binding       ClusterRoleBinding  -       -    create  -       reconcile  -   -
      cloud-native-runtimes  cloud-native-runtimes  App                 -       -    create  -       reconcile  -   -
      ^                      cnr-sa                 ServiceAccount      -       -    create  -       reconcile  -   -
      ^                      cnr-values             Secret              -       -    create  -       reconcile  -   -
    
      Op:      4 create, 0 delete, 0 update, 0 noop
      Wait to: 4 reconcile, 0 delete, 0 noop
    
      Continue? [yN]: y
    
      12:34:08PM: ---- applying 3 changes [0/4 done] ----
      12:34:08PM: create clusterrolebinding/cnr-role-binding (rbac.authorization.k8s.io/v1) cluster
      12:34:08PM: create secret/cnr-values (v1) namespace: cloud-native-runtimes
      12:34:08PM: create serviceaccount/cnr-sa (v1) namespace: cloud-native-runtimes
      12:34:08PM: ---- waiting on 3 changes [0/4 done] ----
      12:34:08PM: ok: reconcile serviceaccount/cnr-sa (v1) namespace: cloud-native-runtimes
      12:34:08PM: ok: reconcile clusterrolebinding/cnr-role-binding (rbac.authorization.k8s.io/v1) cluster
      12:34:08PM: ok: reconcile secret/cnr-values (v1) namespace: cloud-native-runtimes
      12:34:08PM: ---- applying 1 changes [3/4 done] ----
      12:34:08PM: create app/cloud-native-runtimes (kappctrl.k14s.io/v1alpha1) namespace: cloud-native-runtimes
      12:34:08PM: ---- waiting on 1 changes [3/4 done] ----
      12:34:08PM: ongoing: reconcile app/cloud-native-runtimes (kappctrl.k14s.io/v1alpha1) namespace: cloud-native-runtimes
      12:34:08PM:  ^ Waiting for generation 1 to be observed
      12:34:15PM: ongoing: reconcile app/cloud-native-runtimes (kappctrl.k14s.io/v1alpha1) namespace: cloud-native-runtimes
      12:34:15PM:  ^ Reconciling
      12:35:08PM: ---- waiting on 1 changes [3/4 done] ----
      12:35:11PM: ok: reconcile app/cloud-native-runtimes (kappctrl.k14s.io/v1alpha1) namespace: cloud-native-runtimes
      12:35:11PM: ---- applying complete [4/4 done] ----
      12:35:11PM: ---- waiting complete [4/4 done] ----
    
      Succeeded
      ~/dinesh/cloud-native-runtimes
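The install script hands the actual deployment off to kapp-controller through an App custom resource, so you can also query the reconciliation status directly (commands only; output omitted):

      # Inspect the kapp-controller App CR created by the installer
      $ kubectl get app cloud-native-runtimes -n cloud-native-runtimes

      # One-line reconciliation summary, e.g. "Reconcile succeeded"
      $ kubectl get app cloud-native-runtimes -n cloud-native-runtimes -o jsonpath='{.status.friendlyDescription}'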
    
    

1.3 Validate the Cloud Native Runtimes Installation

     $ k get ns
      NAME                             STATUS   AGE
      avi-system                       Active   9d
      cert-manager                     Active   9d
      cloud-native-runtimes            Active   3m24s
      default                          Active   9d
      external-dns                     Active   2d4h
      knative-discovery                Active   2m52s
      knative-eventing                 Active   3m
      knative-serving                  Active   3m3s
      knative-sources                  Active   2m54s
      kube-node-lease                  Active   9d
      kube-public                      Active   9d
      kube-system                      Active   9d
      pinniped-concierge               Active   9d
      pinniped-supervisor              Active   9d
      tanzu-package-repo-global        Active   9d
      tanzu-system-dashboards          Active   7d10h
      tanzu-system-ingress             Active   8d
      tanzu-system-logging             Active   7d10h
      tanzu-system-monitoring          Active   7d18h
      tanzu-system-registry            Active   21h
      tanzu-system-service-discovery   Active   2d4h
      tkg-system                       Active   9d
      tkg-system-public                Active   9d
      triggermesh                      Active   2m52s
      vmware-sources                   Active   2m53s
    
      # Newly created namespaces:
      # cloud-native-runtimes, knative-discovery, knative-eventing, knative-serving, knative-sources, triggermesh, vmware-sources
     
    # Ensure all the pods are running; you can use the below command for a quick check
    $ k get po -A
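For a more targeted check than scanning every pod in the cluster, a small loop (a simple sketch) over just the namespaces this install created flags anything unhealthy:

    # Flag any pod that is not Running or Completed in the CNR namespaces
    $ for ns in cloud-native-runtimes knative-discovery knative-eventing knative-serving knative-sources triggermesh vmware-sources; do
        echo "--- $ns ---"
        kubectl get pods -n "$ns" --no-headers | grep -vE 'Running|Completed' || echo "all pods healthy"
      done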
    
    

2. Testing Cloud Native Runtimes for Serverless

Test scenario: In this example, I will only be testing the Knative Serving part.

  • Let’s test Knative Serving by following the steps below.
    # Name of the workload namespace
    $ export WORKLOAD_NAMESPACE='cnr-demo'
    $ kubectl create namespace ${WORKLOAD_NAMESPACE}
      namespace/cnr-demo created
    
    # Creating Knative Service
   $ kn service create hello-yeti -n ${WORKLOAD_NAMESPACE}   --image projects.registry.vmware.com/tanzu_serverless/hello-yeti@sha256:17d640edc48776cfc604a14fbabf1b4f88443acc580052eef3a753751ee31652 --env TARGET='hello-yeti'
   Creating service 'hello-yeti' in namespace 'cnr-demo':

  0.080s The Route is still working to reflect the latest desired specification.
  0.132s ...
  0.160s Configuration "hello-yeti" is waiting for a Revision to become ready.
  8.520s ...
  8.593s Ingress has not yet been reconciled.
  8.775s Waiting for Envoys to receive Endpoints data.
  9.067s Waiting for load balancer to be ready
  9.319s Ready to serve.

    Service 'hello-yeti' created to latest revision 'hello-yeti-00001' is available at URL:
    http://hello-yeti.cnr-demo.example.com
 
  # The *.example.com URL is not a resolvable DNS name, so we grab the external IP of
  # the Envoy service that Contour uses and send requests there with a Host header
  $ export EXTERNAL_ADDRESS=$(kubectl get service envoy -n tanzu-system-ingress \
  --output 'jsonpath={.status.loadBalancer.ingress[0].ip}')

  $ echo $EXTERNAL_ADDRESS

  # Access the deployed serverless application
  $ curl -H "Host: hello-yeti.${WORKLOAD_NAMESPACE}.example.com" ${EXTERNAL_ADDRESS}
              ______________
            /               \
           |   hello from    |
           |  cloud native   |
           |    runtimes     |     .xMWxw.
            \______________\ |   wY     Ym.
                            \|  C  ,  ,   O
                                 \  ww   /.
                               ..x       x..
                              .x   wwwww    x.
                             .x               x.
                             x   \         /   x
                             Y   Y         Y   Y
                              wwv    x      vww
                                \    /\    /
                                :www:  :www:
   # List the pod created by the Knative Serving app
   $ k get po -n cnr-demo
    NAME                                          READY   STATUS    RESTARTS   AGE
    hello-yeti-00001-deployment-bb4f68d54-jrbf6   2/2     Running   0          50s
  # List the Knative services in the cnr-demo namespace
    $ k get ksvc -n cnr-demo
    NAME         URL                                      LATESTCREATED      LATESTREADY        READY   REASON
    hello-yeti   http://hello-yeti.cnr-demo.example.com   hello-yeti-00001   hello-yeti-00001   True
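Each deploy of a Knative service produces an immutable revision; hello-yeti-00001 above is the first one. You can list revisions with kn:

    # List the revisions behind the service
    $ kn revision list -n ${WORKLOAD_NAMESPACE}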
  
  # List the HTTPProxies created by Contour
   $ k get proxy -n cnr-demo
    NAME                                                       FQDN                                    TLS SECRET   STATUS   STATUS DESCRIPTION
    hello-yeti-contour-hello-yeti.cnr-demo                     hello-yeti.cnr-demo                                  valid    Valid HTTPProxy
    hello-yeti-contour-hello-yeti.cnr-demo.example.com         hello-yeti.cnr-demo.example.com                      valid    Valid HTTPProxy
    hello-yeti-contour-hello-yeti.cnr-demo.svc                 hello-yeti.cnr-demo.svc                              valid    Valid HTTPProxy
    hello-yeti-contour-hello-yeti.cnr-demo.svc.cluster.local   hello-yeti.cnr-demo.svc.cluster.local                valid    Valid HTTPProxy
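A side note on the URL: the example.com suffix is Knative's default placeholder domain. If you own a real domain with a wildcard DNS record pointing at the Envoy load balancer IP, you can have Knative mint resolvable URLs by updating its config-domain ConfigMap; a sketch with a hypothetical domain:

    # Replace the default example.com domain (apps.mycompany.example is hypothetical);
    # an empty string value applies the domain to all services
    $ kubectl patch configmap config-domain -n knative-serving \
        --type merge -p '{"data":{"apps.mycompany.example":""}}'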

If there is no traffic to the application for a few minutes, Knative scales it back down and the pod is deleted. Check the pods once again.

 # List the pods in cnr-demo namespace
 $ k get po -n cnr-demo
 No resources found in cnr-demo namespace.
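How quickly this happens is tunable. Cluster-wide, the grace period lives in Knative's config-autoscaler ConfigMap; per service, the autoscaling window annotation controls how long traffic is averaged before a scaling decision. Both values below are illustrative:

 # Cluster-wide: how long the last pod is kept around after traffic stops
 $ kubectl patch configmap config-autoscaler -n knative-serving \
     --type merge -p '{"data":{"scale-to-zero-grace-period":"60s"}}'

 # Per service: lengthen the averaging window so the app scales down more slowly
 $ kn service update hello-yeti -n ${WORKLOAD_NAMESPACE} \
     --annotation autoscaling.knative.dev/window=120s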

Let’s send some traffic again. Run the curl command again.

 # Access the deployed serverless application
 $ curl -H "Host: hello-yeti.${WORKLOAD_NAMESPACE}.example.com" ${EXTERNAL_ADDRESS}
             ______________
           /               \
          |   hello from    |
          |  cloud native   |
          |    runtimes     |     .xMWxw.
           \______________\ |   wY     Ym.
                           \|  C  ,  ,   O
                                \  ww   /.
                              ..x       x..
                             .x   wwwww    x.
                            .x               x.
                            x   \         /   x
                            Y   Y         Y   Y
                             wwv    x      vww
                               \    /\    /
                               :www:  :www:
   # List the pods
   $ k get po -n cnr-demo
   NAME                                          READY   STATUS    RESTARTS   AGE
   hello-yeti-00001-deployment-bb4f68d54-nlzh5   2/2     Running   0          5s

Validate the app using a browser.
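Since hello-yeti.cnr-demo.example.com is not a real DNS name, the browser needs help resolving it. One quick option (a local sketch; remove the entry when you are done) is a hosts-file entry pointing at the Envoy address:

 # Map the Knative URL to the Envoy load balancer IP for local browsing
 $ echo "${EXTERNAL_ADDRESS} hello-yeti.${WORKLOAD_NAMESPACE}.example.com" | sudo tee -a /etc/hosts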

Look at the pod AGE: the pod was created only when the application received traffic. If you keep monitoring the pod status, you will see it go into the Terminating state again soon after the traffic stops.

 # List the pods
 $ k get po -n cnr-demo
   NAME                                          READY   STATUS        RESTARTS   AGE
   hello-yeti-00001-deployment-bb4f68d54-nlzh5   1/2     Terminating   0          92s
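To watch the whole scale-from-zero and scale-to-zero cycle live, keep a watch running in a second terminal while you curl from the first:

 # Stream pod lifecycle events as the app scales up and back down
 $ kubectl get pods -n ${WORKLOAD_NAMESPACE} -w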

3. Reference Links

  • Installing the CLI tools (TAP Beta on an AKS cluster): https://mappslearning.wordpress.com/2021/09/09/installing-tanzu-application-platform-tap-beta-on-an-aks-cluster/
  • VMware Tanzu Kubernetes Grid 1.4 Release Notes: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.4/rn/VMware-Tanzu-Kubernetes-Grid-14-Release-Notes.html
