- Allow users to specify the OSD backend store in the `cephCluster` CR so that new OSDs are built with it.
- Allow users to migrate existing OSDs to a new backend store via the `cephCluster` CR. The migration process should include destroying the existing OSDs one by one, wiping the drive, then recreating a new OSD on that drive with the same ID.
- The first backend store to support is `bluestore-rdr`. In the future, other backends will also need to be supported, such as `seastore` from the Crimson effort.

Migration of OSDs with the following configurations is deferred for now and will be considered in the future:
Add `spec.storage.store` in the Ceph cluster YAML:

```yaml
storage:
  store:
    type: bluestore-rdr
    updateStore: yes-really-update-store
```
- `type`: The backend to be used for OSDs: `bluestore`, `bluestore-rdr`, etc. The default type will be `bluestore`.
- `updateStore`: Allows the operator to migrate existing OSDs to a different backend. This field can only take the value `yes-really-update-store`. If the user wants to change the `store.type` field for an existing cluster, they will also need to update `spec.storage.store.updateStore` with `yes-really-update-store`.
Add `status.storage.osd` to the Ceph cluster status. This will help convey the progress of OSD migration:

```yaml
status:
  storage:
    osd:
      storeType:
        bluestore: 3
        bluestore-rdr: 5
```
- `storeType.bluestore`: Total number of BlueStore OSDs running.
- `storeType.bluestore-rdr`: Total number of `bluestore-rdr` OSDs running.

The cluster `phase` should be set to `progressing` while OSDs are migrating.
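For illustration, the migration progress could be read from the CR status and cross-checked against what the OSDs themselves report. A minimal sketch, assuming the default `rook-ceph` namespace and CephCluster name, and a Rook toolbox pod with `jq` available:

```
# Read the per-backend OSD counts from the CephCluster status.
kubectl -n rook-ceph get cephcluster rook-ceph \
  -o jsonpath='{.status.storage.osd.storeType}'

# Cross-check from the toolbox: count OSDs by the backend they report.
ceph osd metadata | jq -r '.[].osd_objectstore' | sort | uniq -c
```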
The migration process will involve destroying existing OSDs one by one, wiping the drives, deploying a new OSD with the same ID, then waiting for all PGs to be `active+clean` before migrating the next OSD. Since this operation involves possible impact or downtime, users should exercise caution before proceeding with this action.
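The "all PGs `active+clean`" gate could be approximated from the toolbox as sketched below. This is only an illustration; the JSON field names (`pgmap.num_pgs`, `pgmap.pgs_by_state`) come from `ceph status -f json` and may vary across Ceph releases:

```
# Succeeds only when every PG reports active+clean.
all_pgs_active_clean() {
  ceph status -f json | jq -e '
    .pgmap.num_pgs as $total
    | ([.pgmap.pgs_by_state[]
        | select(.state_name == "active+clean") | .count] | add // 0) == $total'
}
```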
NOTE: Once the OSDs are migrated to a new backend, say `bluestore-rdr`, they won't be allowed to be migrated back to the legacy store (BlueStore).
Add a new label `osd-store:<osd store type>` to all OSD pods. This label will help identify OSDs whose backend does not match the store configured in the spec (`spec.storage.store.type`).
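For example, the label could be inspected alongside the OSD pods; a sketch assuming the default `rook-ceph` namespace and Rook's existing `app=rook-ceph-osd` pod label:

```
# Show each OSD pod together with the value of the proposed osd-store label.
kubectl -n rook-ceph get pods -l app=rook-ceph-osd -L osd-store
```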
Pass `spec.storage.store.type` as an environment variable in the OSD prepare job. If no OSD store is provided in the spec, then set the environment variable to `bluestore`. The prepare job will pass this environment variable to the `ceph-volume` command.

RAW MODE:

```
ceph-volume raw prepare <OSD_STORE_ENV_VARIABLE> --data /dev/vda
```

LVM MODE:

```
ceph-volume lvm prepare <OSD_STORE_ENV_VARIABLE> --data /dev/vda
```
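As a concrete illustration, if the environment variable carried the ready-made `ceph-volume` flag for the configured store, the raw-mode invocation for `bluestore-rdr` could expand roughly as follows. The `OSD_STORE` variable name and the `--bluestore-rdr` flag are assumptions of this sketch; `/dev/vda` is an example device:

```
# Hypothetical expansion of the placeholder above for the bluestore-rdr case.
OSD_STORE="--bluestore-rdr"
ceph-volume raw prepare "${OSD_STORE}" --data /dev/vda
```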
The `ceph-volume activate` command doesn't require the OSD backend to be passed as an argument. It auto-detects the backend that was used when the OSD was prepared.
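For example, activation looks the same regardless of the backend (a sketch; `<OSD_ID>` and `<OSD_FSID>` are placeholders):

```
# No store-type flag is needed; the backend is detected from the prepared OSD.
ceph-volume lvm activate <OSD_ID> <OSD_FSID>
```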
Once `bluestore-rdr` is supported, admins can migrate `bluestore` OSDs to `bluestore-rdr`. In order to migrate OSDs to use `bluestore-rdr`, admins must patch the Ceph cluster spec as below:
```yaml
storage:
  store:
    type: bluestore-rdr
    updateStore: yes-really-update-store
```
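For example, the patch could be applied from the command line as follows (a sketch assuming the default `rook-ceph` namespace and CephCluster name):

```
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p \
  '{"spec":{"storage":{"store":{"type":"bluestore-rdr","updateStore":"yes-really-update-store"}}}}'
```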
The operator's reconciler will replace one OSD at a time. A configmap will be used to store the OSD ID currently being migrated. OSD replacement steps:
1. List the OSDs whose pod label `osd-store:<osd store type>` does not match `spec.storage.store.type`.
2. If the PGs are not `active+clean`, do not proceed.
3. If the PGs are `active+clean` but a previous OSD replacement is not completed, do not proceed.
4. If the PGs are `active+clean` and no replacement is in progress, then select an OSD to be migrated.

The OSD prepare job pod will destroy and recreate the selected OSD using the following steps:
1. Use `ceph-volume lvm list` (or `ceph-volume raw list` for raw-mode OSDs) to fetch the OSD path.
2. Destroy the OSD using the following command:

   ```
   ceph osd destroy <OSD_ID> --yes-i-really-mean-it
   ```

3. Wipe the OSD drive. This removes all the data on the device (see the example after these steps).
4. Prepare the OSD with the new store type by using the same OSD ID. This is done by passing the OSD ID via the `--osd-id` flag to the `ceph-volume` command:

   ```
   ceph-volume lvm prepare --osd-id <OSD_ID> --data <OSD_PATH>
   ```
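The wipe in step 3 is not a single fixed command; one possible sequence is sketched below (`<OSD_PATH>` is the device being replaced, and this irreversibly destroys its data):

```
# Clear filesystem/LVM signatures and partition tables, then zero the start
# of the device so ceph-volume sees a clean disk. <OSD_PATH> is a placeholder.
wipefs --all <OSD_PATH>
sgdisk --zap-all <OSD_PATH>
dd if=/dev/zero of=<OSD_PATH> bs=1M count=100 oflag=direct,dsync
```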
These changes require significant development effort to migrate existing OSDs to use a new backend. They will be divided into the following phases:

- OSDs created using `ceph-volume batch` (which adds additional complications).