{{ template "generatedDocsWarning" . }}
Creates Rook resources to configure a Ceph cluster using the Helm package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as:

* A CephCluster, along with CephBlockPool, CephFileSystem, and CephObjectStore resources
* Storage classes corresponding to each of the block pools, filesystems, and object stores
* An ingress for external access to the object store
The `helm install` command deploys rook on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the `rook-ceph` namespace. The clusters can be installed into the same namespace as the operator or a separate namespace.
Rook currently publishes builds of this chart to the `release` and `master` channels.
Before installing, review the values.yaml to confirm whether the default settings need to be updated:
* If the operator was installed in a namespace other than `rook-ceph`, the namespace must be set in the `operatorNamespace` variable (see the sketch after this list).
* Set the desired settings in the `cephClusterSpec`. The defaults are only an example and not likely to apply to your cluster.
* The `monitoring` section should be removed from the `cephClusterSpec`, as it is specified separately in the helm settings.
* The default values for `cephBlockPools`, `cephFileSystems`, and `cephObjectStores` will create one of each, and their corresponding storage classes.
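For example, a minimal customized values.yaml might look like the sketch below. The mon count and storage selection are illustrative assumptions, not recommendations for your cluster:

```yaml
# Illustrative overrides only; review the chart's full values.yaml for all settings.
operatorNamespace: rook-ceph   # must match the namespace where the operator chart is installed
cephClusterSpec:
  mon:
    count: 3                   # assumed mon count for a small cluster
  storage:
    useAllNodes: true          # assumed: consume all nodes and raw devices
    useAllDevices: true
```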
The release channel is the most recent release of Rook that is considered stable for the community.
The example install assumes you have first installed the Rook Operator Helm Chart and created your customized values.yaml.
```console
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
   --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
```
!!! Note
    --namespace specifies the cephcluster namespace, which may be different from the rook operator namespace.
The following table lists the configurable parameters of the rook-ceph-cluster chart and their default values.
{{ template "chart.valuesTable" . }}
The `CephCluster` CRD takes its spec from `cephClusterSpec.*`. This is not an exhaustive list of parameters. For the full list, see the Cluster CRD topic.
The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (cluster.yaml). For a different configuration such as a PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml), or stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec` with the specs from those examples.
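As a sketch of such a converged spec, the fragment below sets the Ceph image, host data directory, mons, and storage selection; the image tag and counts are assumptions for illustration, not values taken from cluster.yaml:

```yaml
cephClusterSpec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # assumed tag; pin a Ceph release supported by your Rook version
  dataDirHostPath: /var/lib/rook   # where Rook keeps its state on each host
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: true
```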
The `cephBlockPools` array in the values file will define a list of CephBlockPool as described in the table below.
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `name` | The name of the CephBlockPool | `ceph-blockpool` |
| `spec` | The CephBlockPool spec, see the CephBlockPool documentation. | `{}` |
| `storageClass.enabled` | Whether a storage class is deployed alongside the CephBlockPool | `true` |
| `storageClass.isDefault` | Whether the storage class will be the default storage class for PVCs. See PersistentVolumeClaim documentation for details. | `true` |
| `storageClass.name` | The name of the storage class | `ceph-block` |
| `storageClass.parameters` | See Block Storage documentation or the helm values.yaml for suitable values | see values.yaml |
| `storageClass.reclaimPolicy` | The default Reclaim Policy to apply to PVCs created with this storage class. | `Delete` |
| `storageClass.allowVolumeExpansion` | Whether volume expansion is allowed by default. | `true` |
| `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` |
| `storageClass.allowedTopologies` | Specifies the allowedTopologies for storageClass | `[]` |
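Putting the table's parameters together, a single replicated pool entry might look like this sketch; the pool spec and storage class parameters are assumptions based on common RBD settings, not required values:

```yaml
cephBlockPools:
  - name: ceph-blockpool
    spec:
      failureDomain: host        # assumed failure domain
      replicated:
        size: 3                  # assumed 3-way replication
    storageClass:
      enabled: true
      name: ceph-block
      isDefault: true
      reclaimPolicy: Delete
      allowVolumeExpansion: true
      parameters:
        imageFormat: "2"         # RBD image format 2 supports the modern feature set
        imageFeatures: layering  # minimal feature set widely supported by kernel clients
```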
The `cephFileSystems` array in the values file will define a list of CephFileSystem as described in the table below.
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `name` | The name of the CephFileSystem | `ceph-filesystem` |
| `spec` | The CephFileSystem spec, see the CephFilesystem CRD documentation. | see values.yaml |
| `storageClass.enabled` | Whether a storage class is deployed alongside the CephFileSystem | `true` |
| `storageClass.name` | The name of the storage class | `ceph-filesystem` |
| `storageClass.pool` | The name of the data pool, without the filesystem name prefix | `data0` |
| `storageClass.parameters` | See Shared Filesystem documentation or the helm values.yaml for suitable values | see values.yaml |
| `storageClass.reclaimPolicy` | The default Reclaim Policy to apply to PVCs created with this storage class. | `Delete` |
| `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` |
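For illustration, a filesystem entry combining these parameters could look like the sketch below; the pool layout and MDS counts are assumptions, not values from the chart:

```yaml
cephFileSystems:
  - name: ceph-filesystem
    spec:
      metadataPool:
        replicated:
          size: 3                # assumed metadata replication
      dataPools:
        - name: data0            # matches storageClass.pool below
          failureDomain: host
          replicated:
            size: 3
      metadataServer:
        activeCount: 1
        activeStandby: true
    storageClass:
      enabled: true
      name: ceph-filesystem
      pool: data0                # data pool name without the filesystem name prefix
      reclaimPolicy: Delete
```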
The `cephObjectStores` array in the values file will define a list of CephObjectStore as described in the table below.
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `name` | The name of the CephObjectStore | `ceph-objectstore` |
| `spec` | The CephObjectStore spec, see the CephObjectStore CRD documentation. | see values.yaml |
| `storageClass.enabled` | Whether a storage class is deployed alongside the CephObjectStore | `true` |
| `storageClass.name` | The name of the storage class | `ceph-bucket` |
| `storageClass.parameters` | See Object Store storage class documentation or the helm values.yaml for suitable values | see values.yaml |
| `storageClass.reclaimPolicy` | The default Reclaim Policy to apply to PVCs created with this storage class. | `Delete` |
| `ingress.enabled` | Enable an ingress for the object store | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.host.name` | Ingress hostname | `""` |
| `ingress.host.path` | Ingress path prefix | `/` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `ingress.ingressClassName` | Ingress class name | `""` |
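As a sketch, an object store entry combining these parameters might look like the following; the pool layout, gateway settings, and ingress values are assumptions for illustration:

```yaml
cephObjectStores:
  - name: ceph-objectstore
    spec:
      metadataPool:
        failureDomain: host
        replicated:
          size: 3
      dataPool:
        failureDomain: host
        erasureCoded:            # assumed erasure coding for bucket data
          dataChunks: 2
          codingChunks: 1
      gateway:
        port: 80
        instances: 1
    storageClass:
      enabled: true
      name: ceph-bucket
      reclaimPolicy: Delete
    ingress:
      enabled: false             # set to true and fill in host.name to expose RGW externally
```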
If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster:
1. Extract the `spec` section of your existing CephCluster CR and copy it to the `cephClusterSpec` section in `values.yaml`.

2. Add the following annotations and label to your existing CephCluster CR:
    ```yaml
    annotations:
      meta.helm.sh/release-name: rook-ceph-cluster
      meta.helm.sh/release-namespace: rook-ceph
    labels:
      app.kubernetes.io/managed-by: Helm
    ```
3. Run the `helm install` command in the Installing section to create the chart.

4. In the future when updates to the cluster are needed, ensure the values.yaml always contains the desired CephCluster spec.
To deploy from a local build from your development environment:

```console
cd deploy/charts/rook-ceph-cluster
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster -f values.yaml .
```
To see the currently installed Rook chart:

```console
helm ls --namespace rook-ceph
```
To uninstall/delete the `rook-ceph-cluster` chart:

```console
helm delete --namespace rook-ceph rook-ceph-cluster
```
The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (`/var/lib/rook` by default) and on OSD raw devices is kept. To reuse disks, you will have to wipe them before recreating the cluster. See the teardown documentation for more information.