This guide will walk through the steps to upgrade the version of Ceph in a Rook cluster. Rook and Ceph upgrades are designed to ensure data remains available even while the upgrade is proceeding. Rook will perform the upgrades in a rolling fashion such that application pods are not disrupted.
Rook is cautious when performing upgrades. When an upgrade is requested (the Ceph image has been updated in the CR), Rook goes through the daemons one by one, checking that each daemon can safely be stopped before updating it. Once a daemon's deployment has been updated, Rook waits for things to settle (monitors back in quorum, PGs clean for OSDs, MDSes up, etc.), and only when those conditions are met does it move on to the next daemon. This process repeats until all daemons have been updated.
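To observe this settling behavior while an upgrade is in progress, you can poll the cluster status from the toolbox pod. A minimal sketch, assuming the default toolbox deployment name `rook-ceph-tools` in the `rook-ceph` namespace:

```console
# Poll overall health, mon quorum, and PG states every 5 seconds
watch -n 5 "kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status"
```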
Rook v1.13 supports the following Ceph versions:

* Ceph Reef v18.2.0 or newer
* Ceph Quincy v17.2.0 or newer
Support for Ceph Pacific (16.2.x) is removed in Rook v1.13. Upgrade to Quincy or Reef before upgrading to Rook v1.13.
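Before changing the image, it can help to confirm that no daemon is still running Pacific. One way to check, assuming the toolbox is deployed with the default names from the example manifests:

```console
# Report which Ceph version each daemon type is currently running
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph versions
```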
!!! important
    When an update is requested, the operator will check Ceph's status. **If it is in `HEALTH_ERR`, the operator will refuse to proceed with the upgrade.**
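You can run the same check yourself before requesting the upgrade. A quick sketch via the toolbox, assuming the default deployment name:

```console
# Show the current health status and any detail messages
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail
```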
!!! warning
    Ceph v17.2.2 has a blocking issue when running with Rook. Use v17.2.3 or newer when possible.
In Ceph Quincy (v17), the `device_health_metrics` pool was renamed to `.mgr`. Ceph will perform this migration automatically. The pool rename will be handled automatically by Rook if the configuration of the `device_health_metrics` pool is not customized via CephBlockPool.

If the configuration of the `device_health_metrics` pool is customized via CephBlockPool, two extra steps are required after the Ceph upgrade is complete (see the sketch after the warning below):

1. Create a new CephBlockPool to configure the `.mgr` built-in pool. For an example, see the builtin mgr pool.
2. Delete the old CephBlockPool that represents the `device_health_metrics` pool.

!!! warning
    Ceph Quincy v17.2.1 has a potentially breaking regression with CephNFS. See the NFS documentation's known issue for more detail.
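A sketch of those two extra steps with `kubectl`, assuming the cluster runs in the `rook-ceph` namespace and that the old custom CR is named `device-health-metrics` (both names are assumptions; adjust them to your cluster):

```console
# Step 1: create a CephBlockPool that configures the built-in .mgr pool
kubectl apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: builtin-mgr
  namespace: rook-ceph
spec:
  # spec.name targets the built-in pool instead of creating a new one
  name: .mgr
  failureDomain: host
  replicated:
    size: 3
EOF

# Step 2: delete the old CephBlockPool that represented device_health_metrics
# (the CR name below is hypothetical)
kubectl -n rook-ceph delete cephblockpool device-health-metrics
```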
Official Ceph container images can be found on Quay.
These images are tagged in a few ways:

* Full-ceph-version-and-build tags (e.g., `v17.2.6-20231027`). These tags are recommended for production clusters, as there is no possibility for the cluster to be heterogeneous with respect to the version of Ceph running in containers.
* Ceph major version tags (e.g., `v17`) are useful for development and test clusters so that the latest version of Ceph is always available.

Ceph containers other than the official images from the registry above will not be supported.
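If you want to see which build tags are currently published before picking one, you can list the repository tags directly. A sketch, assuming `skopeo` and `jq` are installed locally:

```console
# List published tags for the official Ceph image and filter for a release
skopeo list-tags docker://quay.io/ceph/ceph | jq -r '.Tags[]' | grep '^v17.2.6'
```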
The upgrade will be automated by the Rook operator after the desired Ceph image is changed in the CephCluster CRD (`spec.cephVersion.image`).
```console
ROOK_CLUSTER_NAMESPACE=rook-ceph
NEW_CEPH_IMAGE='quay.io/ceph/ceph:v17.2.6-20231027'
kubectl -n $ROOK_CLUSTER_NAMESPACE patch CephCluster $ROOK_CLUSTER_NAMESPACE --type=merge -p "{\"spec\": {\"cephVersion\": {\"image\": \"$NEW_CEPH_IMAGE\"}}}"
```
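While the rolling update proceeds, the operator logs show each daemon being checked and updated in turn. One way to follow along, assuming the operator runs in the same namespace under its default deployment name:

```console
# Stream the operator log as it upgrades daemons one by one
kubectl -n $ROOK_CLUSTER_NAMESPACE logs deployment/rook-ceph-operator -f
```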
Since the Rook toolbox is not controlled by the Rook operator, users must perform a manual upgrade by modifying the image to match the Ceph version employed by the new Rook operator release. Using an outdated Ceph version in the toolbox may result in unexpected behaviour.
```console
kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v17.2.6-20231027
```
As with upgrading Rook, wait for the upgrade to complete. Status can be determined in the same way as for the Rook upgrade.
```console
watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{" \treq/upd/avl: "}{.spec.replicas}{"/"}{.status.updatedReplicas}{"/"}{.status.readyReplicas}{" \tceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}'
```
Confirm the upgrade is completed when the versions are all on the desired Ceph version.
```console
kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}' | sort | uniq
```
This cluster is not yet finished:

```console
ceph-version=v16.2.14-0
ceph-version=v17.2.6-0
```
This cluster is finished:

```console
ceph-version=v17.2.6-0
```
Verify the Ceph cluster's health using the health verification doc.
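As a quick spot check from the toolbox (a sketch, assuming the default toolbox deployment; the full verification steps are more thorough):

```console
# Expect HEALTH_OK once the upgrade has fully settled
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s
```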