In Part 1, I discussed setting up Cloud Native Runtime for Tanzu and demonstrated the Knative Serving component. In this post, I will demonstrate the Knative Eventing feature, which also uses Serving to serve the requests.
I am using the same lab environment that I set up in Part 1, so Cloud Native Runtime for Tanzu is running on a TKG cluster.
Knative Eventing Flow
The diagram below shows the different components involved in eventing. I will discuss each one and also create them.

A quick walkthrough of the different components:
Broker: Brokers are Kubernetes custom resources that define an event mesh for collecting a pool of CloudEvents. Brokers provide a discoverable endpoint (status.address) for event ingress, and triggers for event delivery. Event producers can send events to a broker by POSTing the event to the broker's status.address.url.
Trigger: Trigger represents a request to have events delivered to a subscriber from a Broker’s event pool.
Event Producer: A PingSource describes an event source that produces events with a fixed payload on a specified cron schedule.
Subscribers: Subscribers are the workers that ultimately serve the requests. In this post, the subscriber will be a Knative Serving service.
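To illustrate the ingress contract described above, an event producer can POST a CloudEvent directly to a broker's status.address.url using the CloudEvents binary-mode HTTP headers. This is a sketch, not part of the demo below: the URL assumes a broker named default in the cnr-demo namespace, the Ce-* attribute values are example values, and the command must run from inside the cluster since the address is cluster-local.

```shell
# Sketch: POST a CloudEvent to a broker's ingress URL (run from a pod
# inside the cluster; all Ce-* values below are illustrative examples)
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/cnr-demo/default" \
  -X POST \
  -H "Ce-Id: manual-event-001" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: demo.manual.event" \
  -H "Ce-Source: curl-client" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello from curl!"}'
```

Any Trigger on that broker whose filter matches the event's attributes will then deliver it to its subscriber.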
Testing Cloud Native Runtime for Tanzu – Knative Eventing
In this example, I will use the in-memory broker for simplicity. The in-memory broker is a fast and lightweight way to verify that the basic elements of Knative Eventing are working.
Prerequisites
- Create a namespace
# Export namespace variable (k is an alias for kubectl)
$ export WORKLOAD_NAMESPACE=cnr-demo
$ k create ns ${WORKLOAD_NAMESPACE}
namespace/cnr-demo created
- Create a role binding. Use the YAML content below to create the role binding, which assigns the cnr-restricted cluster role to all service accounts in the namespace.
kubectl apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ${WORKLOAD_NAMESPACE}-psp
  namespace: ${WORKLOAD_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cnr-restricted
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts:${WORKLOAD_NAMESPACE}
EOF
Once you create the above resource, you will see the following output.
rolebinding.rbac.authorization.k8s.io/cnr-demo-psp created
Testing Steps
- Create a broker
kubectl apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: ${WORKLOAD_NAMESPACE}
EOF
Once you create the above resource, you will see the following output.
broker.eventing.knative.dev/default created
# List the broker
$ k get broker -n cnr-demo
NAME      URL                                                                          AGE     READY   REASON
default   http://broker-ingress.knative-eventing.svc.cluster.local/cnr-demo/default   9m10s   True
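The URL column above is the broker's discoverable ingress endpoint. It can also be read programmatically from the broker's status, which is handy for scripting; a small sketch, assuming the default broker created above:

```shell
# Read the broker's ingress URL from its status.address field
kubectl get broker default -n cnr-demo \
  -o jsonpath='{.status.address.url}'
```

This is the same endpoint that event producers POST CloudEvents to.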
- Create a consumer for the event
cat <<EOF | kubectl create -f -
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: ${WORKLOAD_NAMESPACE}
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
EOF
Once you create the above resource, you will see the following output.
service.serving.knative.dev/event-display created
# List the knative serving service
$ k get ksvc -n cnr-demo
NAME            URL                                         LATESTCREATED         LATESTREADY           READY   REASON
event-display   http://event-display.cnr-demo.example.com   event-display-00001   event-display-00001   True
- Create a Trigger
kubectl apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display
  namespace: ${WORKLOAD_NAMESPACE}
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
      namespace: ${WORKLOAD_NAMESPACE}
EOF
Once you create the above resource, you will see the following output.
trigger.eventing.knative.dev/event-display created
# List the Trigger
$ k get trigger -n cnr-demo
NAME            BROKER    SUBSCRIBER_URI                                    AGE   READY   REASON
event-display   default   http://event-display.cnr-demo.svc.cluster.local   24s   True
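The Trigger above has no filter, so it delivers every event in the broker's pool to event-display. A Trigger can also filter on CloudEvent context attributes; as a hedged sketch (the Trigger name here is hypothetical), this variant would deliver only PingSource events, whose CloudEvent type is dev.knative.sources.ping:

```shell
kubectl apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  # Hypothetical name for this filtered variant
  name: event-display-ping-only
  namespace: ${WORKLOAD_NAMESPACE}
spec:
  broker: default
  filter:
    attributes:
      # Deliver only events whose CloudEvent "type" attribute matches
      type: dev.knative.sources.ping
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF
```

Events that do not match the filter attributes are simply not delivered to this subscriber.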
- Create an Event Producer
kubectl apply -f - << EOF
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: test-ping-source
  namespace: ${WORKLOAD_NAMESPACE}
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "Hello Eventing!"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
      namespace: ${WORKLOAD_NAMESPACE}
EOF
Once you create the above resource, you will see the following output.
pingsource.sources.knative.dev/test-ping-source created
# List the pingsource (Event Producer)
$ k get pingsource -n cnr-demo
NAME               SINK                                                                         SCHEDULE      AGE   READY   REASON
test-ping-source   http://broker-ingress.knative-eventing.svc.cluster.local/cnr-demo/default   */1 * * * *   40s   False   MinimumReplicasUnavailable
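The False / MinimumReplicasUnavailable status is usually transient while the PingSource adapter deployment comes up. Rather than polling manually, you can block until the source reports Ready; a small sketch, assuming the test-ping-source created above:

```shell
# Block (up to 2 minutes) until the PingSource reports the Ready condition
kubectl wait pingsource/test-ping-source -n cnr-demo \
  --for=condition=Ready --timeout=120s
```

If it still reports False after the timeout, describe the resource to see the underlying condition messages.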
- Verify the consumer log
$ kubectl logs -l serving.knative.dev/service=event-display -c user-container -n cnr-demo --since=10m --tail=50
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.sources.ping
  source: /apis/v1/namespaces/cnr-demo/pingsources/test-ping-source
  id: 1df74a08-b912-4ac5-89ed-5a0ba598a744
  time: 2021-09-26T15:29:00.319361261Z
Extensions,
  knativearrivaltime: 2021-09-26T15:29:00.322158884Z
Data,
  {"message": "Hello Eventing!"}
☁️  cloudevents.Event
Validation: valid
Context Attributes,
  specversion: 1.0
  type: dev.knative.sources.ping
  source: /apis/v1/namespaces/cnr-demo/pingsources/test-ping-source
  id: 903db2ec-d0ea-46e3-802d-de5d22687183
  time: 2021-09-26T15:30:00.426448334Z
Extensions,
  knativearrivaltime: 2021-09-26T15:30:00.425966855Z
Data,
  {"message": "Hello Eventing!"}
- See the list of pods serving the request.
$ k get po -n cnr-demo
NAME                                              READY   STATUS    RESTARTS   AGE
event-display-00001-deployment-6bb48c9555-jjtzv   2/2     Running   0          2m26s
That’s all for the Eventing demo. You can also try out different event sources, such as TriggerMesh.