author    Zac Dover <zac.dover@gmail.com>  2022-10-26 05:14:00 +0200
committer Zac Dover <zac.dover@gmail.com>  2022-10-26 07:23:32 +0200
commit    447967ea0d9b11314d1797f525ab5a11bdeea59e (patch)
tree      cf7c9c07b16ae9f47bd496e6c2b6b0f6cc70071b /doc/ceph-volume
parent    Merge pull request #40363 from orozery/rbd-clone-encryption (diff)
doc/ceph-volume: refine "filestore" section
This commit refines the "filestore" section in the
doc/ceph-volume/lvm/prepare.rst file.

Signed-off-by: Zac Dover <zac.dover@gmail.com>
Diffstat (limited to 'doc/ceph-volume')
-rw-r--r--  doc/ceph-volume/lvm/prepare.rst | 106
1 file changed, 63 insertions(+), 43 deletions(-)
diff --git a/doc/ceph-volume/lvm/prepare.rst b/doc/ceph-volume/lvm/prepare.rst
index 21cae4ee500..a74e6992267 100644
--- a/doc/ceph-volume/lvm/prepare.rst
+++ b/doc/ceph-volume/lvm/prepare.rst
@@ -98,78 +98,98 @@ a volume group and a logical volume using the following convention:
``filestore``
-------------
-This is the OSD backend that allows preparation of logical volumes for
-a :term:`filestore` objectstore OSD.
+``filestore`` is the OSD backend that prepares logical volumes for a
+:term:`filestore`-backed object-store OSD.
-It can use a logical volume for the OSD data and a physical device, a partition
-or logical volume for the journal. A physical device will have a logical volume
-created on it. A volume group will either be created or reused it its name begins
-with ``ceph``. No special preparation is needed for these volumes other than
-following the minimum size requirements for data and journal.
+``filestore`` can use a logical volume for OSD data, and it can use a physical
+device, a partition, or a logical volume for the journal. If a physical device
+is used to create a filestore backend, a logical volume will be created on that
+physical device. A volume group will either be created or reused if its name
+begins with ``ceph``. No special preparation is needed for these volumes other
+than making sure to adhere to the minimum size requirements for data and for
+the journal.
-The CLI call looks like this of a basic standalone filestore OSD::
+Use this command to create a basic filestore OSD:
- ceph-volume lvm prepare --filestore --data <data block device>
+.. prompt:: bash #
-To deploy file store with an external journal::
+ ceph-volume lvm prepare --filestore --data <data block device>
- ceph-volume lvm prepare --filestore --data <data block device> --journal <journal block device>
+Use this command to deploy filestore with an external journal:
-For enabling :ref:`encryption <ceph-volume-lvm-encryption>`, the ``--dmcrypt`` flag is required::
+.. prompt:: bash #
+
+ ceph-volume lvm prepare --filestore --data <data block device> --journal <journal block device>
+
+To enable :ref:`encryption <ceph-volume-lvm-encryption>`, pass the required ``--dmcrypt`` flag:
- ceph-volume lvm prepare --filestore --dmcrypt --data <data block device> --journal <journal block device>
+.. prompt:: bash #
-Both the journal and data block device can take three forms:
+ ceph-volume lvm prepare --filestore --dmcrypt --data <data block device> --journal <journal block device>
+
+Both the journal and the data block device can take three forms:
* a physical block device
* a partition on a physical block device
* a logical volume
-When using logical volumes the value *must* be of the format
-``volume_group/logical_volume``. Since logical volume names
-are not enforced for uniqueness, this prevents accidentally
-choosing the wrong volume.
+If you use a logical volume to deploy filestore, the value that you pass in the
+command *must* be of the format ``volume_group/logical_volume_name``. Since logical
+volume names are not enforced for uniqueness, using this format is meant to
+guard against accidentally choosing the wrong volume (and clobbering its data).
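The ``volume_group/logical_volume_name`` convention described above can be sketched in shell. This is an illustration only, not ceph-volume's actual validation code, and the names ``vg_ceph`` and ``lv_osd_data`` are hypothetical:

```shell
# Hypothetical LV argument in the required "vg/lv" form.
lv_arg="vg_ceph/lv_osd_data"

# Accept only values containing a slash, then split into the
# volume group and logical volume parts.
case "$lv_arg" in
  */*)
    vg="${lv_arg%%/*}"   # text before the first slash
    lv="${lv_arg#*/}"    # text after the first slash
    msg="ok: $vg/$lv"
    ;;
  *)
    msg="error: expected volume_group/logical_volume_name"
    ;;
esac
echo "$msg"
```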
+
+If you use a partition to deploy filestore, the partition *must* contain a
+``PARTUUID`` that can be discovered by ``blkid``. This ensures that the
+partition can be identified correctly regardless of the device's name (or path).
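A partition's ``PARTUUID`` can be read directly with ``blkid -s PARTUUID -o value <partition>``, which requires a real block device. The sketch below instead parses a simulated ``blkid`` output line so it can run anywhere; the device name and UUID value are made up for illustration:

```shell
# A real query would be:  blkid -s PARTUUID -o value /dev/sdc1
# Simulated blkid output line (hypothetical device and UUID):
blkid_line='/dev/sdc1: UUID="abcd-1234" PARTUUID="1b4e28ba-2fa1-11d2-883f-b9a761bde3fb"'

# Extract the value between the PARTUUID quotes.
partuuid="${blkid_line##*PARTUUID=\"}"   # drop everything up to PARTUUID="
partuuid="${partuuid%\"*}"               # drop the trailing quote
echo "$partuuid"
```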
-When using a partition, it *must* contain a ``PARTUUID``, that can be
-discovered by ``blkid``. THis ensure it can later be identified correctly
-regardless of the device name (or path).
+For example, to use a logical volume for OSD data and a partition
+(``/dev/sdc1``) for the journal, run a command of this form:
-For example: passing a logical volume for data and a partition ``/dev/sdc1`` for
-the journal::
+.. prompt:: bash #
- ceph-volume lvm prepare --filestore --data volume_group/lv_name --journal /dev/sdc1
+ ceph-volume lvm prepare --filestore --data volume_group/logical_volume_name --journal /dev/sdc1
-Passing a bare device for data and a logical volume ias the journal::
+Or, to use a bare device for data and a logical volume for the journal:
- ceph-volume lvm prepare --filestore --data /dev/sdc --journal volume_group/journal_lv
+.. prompt:: bash #
-A generated uuid is used to ask the cluster for a new OSD. These two pieces are
-crucial for identifying an OSD and will later be used throughout the
-:ref:`ceph-volume-lvm-activate` process.
+ ceph-volume lvm prepare --filestore --data /dev/sdc --journal volume_group/journal_lv
+
+A generated UUID is used when asking the cluster for a new OSD. These two
+pieces of information (the OSD ID and the OSD UUID) are necessary for
+identifying a given OSD and will later be used throughout the
+:ref:`activation<ceph-volume-lvm-activate>` process.
The OSD data directory is created using the following convention::
/var/lib/ceph/osd/<cluster name>-<osd id>
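For example, with a cluster named ``ceph`` and an OSD id of ``0`` (both hypothetical values), the convention expands as follows:

```shell
# Hypothetical cluster name and OSD id.
cluster_name=ceph
osd_id=0

# The OSD data directory path, following the convention above.
osd_dir="/var/lib/ceph/osd/${cluster_name}-${osd_id}"
echo "$osd_dir"
```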
-At this point the data volume is mounted at this location, and the journal
-volume is linked::
+To link the journal volume to the mounted data volume, run this command:
+
+.. prompt:: bash #
+
+ ln -s /path/to/journal /var/lib/ceph/osd/<cluster_name>-<osd-id>/journal
+
+To fetch the monmap using the bootstrap key from the OSD, run this command:
+
+.. prompt:: bash #
- ln -s /path/to/journal /var/lib/ceph/osd/<cluster_name>-<osd-id>/journal
+   /usr/bin/ceph --cluster ceph --name client.bootstrap-osd \
+      --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
+      mon getmap -o /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap
-The monmap is fetched using the bootstrap key from the OSD::
+To populate the OSD directory (which has already been mounted), use this ``ceph-osd`` command:
+.. prompt:: bash #
- /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
- --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
- mon getmap -o /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap
+   ceph-osd --cluster ceph --mkfs --mkkey -i <osd id> \
+      --monmap /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap \
+      --osd-data /var/lib/ceph/osd/<cluster name>-<osd id> \
+      --osd-journal /var/lib/ceph/osd/<cluster name>-<osd id>/journal \
+      --osd-uuid <osd uuid> \
+      --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring \
+      --setuser ceph --setgroup ceph
-``ceph-osd`` will be called to populate the OSD directory, that is already
-mounted, re-using all the pieces of information from the initial steps::
+All of the information from the previous steps is used in the above command.
- ceph-osd --cluster ceph --mkfs --mkkey -i <osd id> \
- --monmap /var/lib/ceph/osd/<cluster name>-<osd id>/activate.monmap --osd-data \
- /var/lib/ceph/osd/<cluster name>-<osd id> --osd-journal /var/lib/ceph/osd/<cluster name>-<osd id>/journal \
- --osd-uuid <osd uuid> --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring \
- --setuser ceph --setgroup ceph
.. _ceph-volume-lvm-partitions: