Introduction
Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management.
Installing OpenShift Data Foundation Operator

In OperatorHub, search for OpenShift Data Foundation, click the Install button, and leave all options at their defaults.

After reviewing the settings, scroll down the page and click Install. You will notice that the installation is in progress.

Wait for the installation to complete; it takes approximately 3 minutes.
You can also validate the installation progress by running the following command and ensuring that all the pods are in the Running state.
oc get all -n openshift-storage
NAME READY STATUS RESTARTS AGE
pod/csi-addons-controller-manager-757fcc4bd9-pld42 2/2 Running 0 97s
pod/noobaa-operator-f85f6c966-chxxr 1/1 Running 0 2m10s
pod/ocs-metrics-exporter-899c8c4fb-kf48b 1/1 Running 0 80s
pod/ocs-operator-55c67ffddf-7wsh2 1/1 Running 0 2m2s
pod/odf-console-6988685bcd-q58b9 1/1 Running 0 2m14s
pod/odf-operator-controller-manager-5b4dd8474f-l7gwm 2/2 Running 0 2m14s
pod/rook-ceph-operator-6b7b4675bf-6jfpz 1/1 Running 0 2m2s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/csi-addons-controller-manager-metrics-service ClusterIP 172.30.191.182 <none> 8443/TCP 2m24s
service/noobaa-operator-service ClusterIP 172.30.30.102 <none> 443/TCP 2m11s
service/odf-console-service ClusterIP 172.30.4.53 <none> 9001/TCP 99s
service/odf-operator-controller-manager-metrics-service ClusterIP 172.30.200.119 <none> 8443/TCP 2m20s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/csi-addons-controller-manager 1/1 1 1 2m19s
deployment.apps/noobaa-operator 1/1 1 1 2m10s
deployment.apps/ocs-metrics-exporter 1/1 1 1 2m2s
deployment.apps/ocs-operator 1/1 1 1 2m2s
deployment.apps/odf-console 1/1 1 1 2m14s
deployment.apps/odf-operator-controller-manager 1/1 1 1 2m15s
deployment.apps/rook-ceph-operator 1/1 1 1 2m2s
NAME DESIRED CURRENT READY AGE
replicaset.apps/csi-addons-controller-manager-757fcc4bd9 1 1 1 97s
replicaset.apps/csi-addons-controller-manager-85965bbf99 0 0 0 2m19s
replicaset.apps/noobaa-operator-f85f6c966 1 1 1 2m10s
replicaset.apps/ocs-metrics-exporter-899c8c4fb 1 1 1 2m2s
replicaset.apps/ocs-operator-55c67ffddf 1 1 1 2m2s
replicaset.apps/odf-console-6988685bcd 1 1 1 2m14s
replicaset.apps/odf-operator-controller-manager-5b4dd8474f 1 1 1 2m14s
replicaset.apps/rook-ceph-operator-6b7b4675bf 1 1 1 2m2s
Once the installation is complete, you will see a green tick mark.
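If you also want a CLI confirmation, you can check the operator's ClusterServiceVersion; the install is finished when its PHASE column shows Succeeded (the exact CSV name and version vary per cluster).
oc get csv -n openshift-storage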

Next, we will create custom resources before ODF can be used. Before that, you will notice a popup in the top-right corner of the OCP console about new web console updates, like the one shown below.

Let's refresh the web console by clicking the Refresh web console option shown above.
Once the OCP web console is reloaded, navigate to Storage and you will notice the Data Foundation option added there. Click on it. There is not much data to see yet, as it was only just installed.

Now, we will create the required CRs to use the ODF operator.
Navigate to Operators -> Installed Operators and click on the OpenShift Data Foundation operator.

Click on the Create StorageSystem button.

Click Next.

Notice the node requirements in the above screenshot. Choose the appropriate nodes (a minimum of 3 nodes) and click Next.
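ODF expects the storage nodes to carry the cluster.ocs.openshift.io/openshift-storage label; if you prefer to label them from the CLI before running the wizard, a command like the following works (worker-0 is a placeholder node name).
oc label node worker-0 cluster.ocs.openshift.io/openshift-storage=''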

Leave the default selection and click Next. Review the summary.

Click Create StorageSystem.

It will take a few minutes for all the required pods to be created and running. You can validate the list of pods running inside the openshift-storage namespace by running the following command.
oc get po -n openshift-storage
NAME READY STATUS RESTARTS AGE
csi-addons-controller-manager-757fcc4bd9-tnzsq 2/2 Running 1 (43m ago) 92m
csi-cephfsplugin-gz8kg 3/3 Running 0 78m
csi-cephfsplugin-nv5kn 3/3 Running 0 78m
csi-cephfsplugin-provisioner-747f9b4884-8zb5k 6/6 Running 0 72m
csi-cephfsplugin-provisioner-747f9b4884-sl7sh 6/6 Running 4 (43m ago) 78m
csi-cephfsplugin-qwwmr 3/3 Running 0 78m
csi-cephfsplugin-znqkh 3/3 Running 0 78m
csi-rbdplugin-jj45d 4/4 Running 0 78m
csi-rbdplugin-provisioner-67c84f6f8c-ldn5m 7/7 Running 0 72m
csi-rbdplugin-provisioner-67c84f6f8c-nsr9l 7/7 Running 3 (43m ago) 78m
csi-rbdplugin-v5b2p 4/4 Running 0 78m
csi-rbdplugin-z4rtj 4/4 Running 0 78m
csi-rbdplugin-zz8zc 4/4 Running 0 78m
noobaa-core-0 1/1 Running 0 72m
noobaa-db-pg-0 1/1 Running 0 72m
noobaa-endpoint-55ccb457b4-xxtxg 1/1 Running 0 69m
noobaa-operator-f85f6c966-4gjx4 1/1 Running 1 (72m ago) 92m
ocs-metrics-exporter-899c8c4fb-hr5xv 1/1 Running 0 92m
ocs-operator-55c67ffddf-268jd 1/1 Running 1 (43m ago) 92m
odf-console-6988685bcd-pkdl4 1/1 Running 0 92m
odf-operator-controller-manager-5b4dd8474f-zn5zc 2/2 Running 1 (43m ago) 92m
rook-ceph-crashcollector-truscaleocp-rr9p9-worker-5nl2m-bfrjzsl 1/1 Running 0 73m
rook-ceph-crashcollector-truscaleocp-rr9p9-worker-dhj42-fb54zmk 1/1 Running 0 73m
rook-ceph-crashcollector-truscaleocp-rr9p9-worker-hgb2x-6c6wnbq 1/1 Running 0 70m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7d5cd76bz46bz 2/2 Running 0 72m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-7b9b8767fqnwg 2/2 Running 0 72m
rook-ceph-mgr-a-756646ddd6-jl8kc 2/2 Running 0 72m
rook-ceph-mon-a-6545699968-qgvhh 2/2 Running 0 72m
rook-ceph-mon-b-5bb55b8df4-7np9w 2/2 Running 0 76m
rook-ceph-mon-c-7d84bd457b-9jb5p 2/2 Running 0 76m
rook-ceph-operator-6b7b4675bf-lj9vs 1/1 Running 0 92m
rook-ceph-osd-0-68d5d54b7d-s4896 2/2 Running 0 73m
rook-ceph-osd-1-d4976fcf-jnpt2 2/2 Running 0 72m
rook-ceph-osd-2-6bf5776577-2dtgr 2/2 Running 0 73m
rook-ceph-osd-prepare-a60be363705b588ec09ce71e83046fbc-8mnnt 0/1 Completed 0 73m
rook-ceph-osd-prepare-fd6423be76a73de850c08b107beafbfc-5mnt2 0/1 Completed 0 73m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5757649hmvhv 2/2 Running 0 72m
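Besides the pod check, you can confirm that the storage cluster itself has finished reconciling; its PHASE should read Ready (ocs-storagecluster is the default name created by the wizard).
oc get storagecluster -n openshift-storage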
Validate the storage from the Data Foundation page inside the OCP web console. Navigate to Storage and click on Data Foundation.

Once you click on the Storage Capacity name, the details are displayed.
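The same health information is also exposed on the CLI through the Rook CephCluster resource; a healthy cluster reports HEALTH_OK.
oc get cephcluster -n openshift-storage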

Now, let's validate the ODF setup by creating a PVC.
Creating a PVC
Click on Storage -> PersistentVolumeClaims and fill in the details.

As you can see in the screenshot above, two additional storage classes have been created, and their descriptions are self-explanatory. Here, we will use ocs-storagecluster-cephfs. Click Create.
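If you prefer the CLI, an equivalent PVC can be created with a manifest along these lines (a minimal sketch; the claim name, namespace, and size are arbitrary examples).
cat <<EOF | oc apply -n default -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ocs-storagecluster-cephfs
  resources:
    requests:
      storage: 1Gi
EOF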
You will notice that the PVC is created successfully and that it is in the Bound state, since a persistent volume was created for it.
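The Bound status can also be checked from the CLI (using the example claim and namespace from the sketch above).
oc get pvc cephfs-test-pvc -n default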

Below are the volume details.
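The backing PersistentVolume can likewise be listed from the CLI.
oc get pv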

That’s all for this post.