Diffstat (limited to 'doc/cephadm/services/osd.rst')
-rw-r--r--  doc/cephadm/services/osd.rst | 58
1 file changed, 38 insertions(+), 20 deletions(-)
diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index 9c0b4d2b495..90ebd86f897 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -198,6 +198,18 @@ There are a few ways to create new OSDs:
 .. warning:: When deploying new OSDs with ``cephadm``, ensure that the
    ``ceph-osd`` package is not already installed on the target host. If it is
    installed, conflicts may arise in the management and control of the OSD
    that may lead to errors or unexpected behavior.
 
+* OSDs created via ``ceph orch daemon add`` are by default not added to a named OSD service; they are placed in the plain ``osd`` service. To attach an OSD to a different, existing OSD service, issue a command of the following form:
+
+  .. prompt:: bash *
+
+    ceph orch osd set-spec-affinity <service_name> <osd_id(s)>
+
+  For example:
+
+  .. prompt:: bash #
+
+    ceph orch osd set-spec-affinity osd.default_drive_group 0 1
+
 Dry Run
 -------
@@ -478,22 +490,27 @@ for that OSD and also set a specific memory target. For example,
 
 Advanced OSD Service Specifications
 ===================================
 
-:ref:`orchestrator-cli-service-spec`\s of type ``osd`` are a way to describe a
-cluster layout, using the properties of disks. Service specifications give the
-user an abstract way to tell Ceph which disks should turn into OSDs with which
-configurations, without knowing the specifics of device names and paths.
+:ref:`orchestrator-cli-service-spec`\s of type ``osd`` provide a way to use the
+properties of disks to describe a Ceph cluster's layout. Service specifications
+are an abstraction used to tell Ceph which disks it should transform into OSDs
+and which configurations to apply to those OSDs.
+:ref:`orchestrator-cli-service-spec`\s make it possible to target these disks
+for transformation into OSDs even when the Ceph cluster operator does not know
+the specific device names and paths associated with those disks.
 
-Service specifications make it possible to define a yaml or json file that can
-be used to reduce the amount of manual work involved in creating OSDs.
+:ref:`orchestrator-cli-service-spec`\s make it possible to define a ``.yaml``
+or ``.json`` file that can be used to reduce the amount of manual work involved
+in creating OSDs.
 
 .. note::
 
-   It is recommended that advanced OSD specs include the ``service_id`` field
-   set. The plain ``osd`` service with no service id is where OSDs created
-   using ``ceph orch daemon add`` or ``ceph orch apply osd --all-available-devices``
-   are placed. Not including a ``service_id`` in your OSD spec would mix
-   the OSDs from your spec with those OSDs and potentially overwrite services
-   specs created by cephadm to track them. Newer versions of cephadm will even
-   block creation of advanced OSD specs without the service_id present
+   We recommend that advanced OSD specs include the ``service_id`` field set.
+   OSDs created using ``ceph orch daemon add`` or ``ceph orch apply osd
+   --all-available-devices`` are placed in the plain ``osd`` service. Failing
+   to include a ``service_id`` in your OSD spec causes the Ceph cluster to mix
+   the OSDs from your spec with those OSDs, which can potentially result in the
+   overwriting of service specs created by ``cephadm`` to track them. Newer
+   versions of ``cephadm`` will even block creation of advanced OSD specs that
+   do not include the ``service_id``.
 
 For example, instead of running the following command:
@@ -501,8 +518,8 @@ For example, instead of running the following command:
 
 .. prompt:: bash [monitor.1]#
 
   ceph orch daemon add osd *<host>*:*<path-to-device>*
 
-for each device and each host, we can define a yaml or json file that allows us
-to describe the layout. Here's the most basic example.
+for each device and each host, we can define a ``.yaml`` or ``.json`` file that
+allows us to describe the layout.
 
 Here is the most basic example:
 
 Create a file called (for example) ``osd_spec.yml``:
@@ -520,17 +537,18 @@ This means :
 
 #. Turn any available device (ceph-volume decides what 'available' is) into an
    OSD on all hosts that match the glob pattern '*'. (The glob pattern matches
-   against the registered hosts from `host ls`) A more detailed section on
-   host_pattern is available below.
+   against the registered hosts from `ceph orch host ls`) See
+   :ref:`cephadm-services-placement-by-pattern-matching` for more on using
+   ``host_pattern``-matching to turn devices into OSDs.
 
-#. Then pass it to `osd create` like this:
+#. Pass ``osd_spec.yml`` to ``osd create`` by using the following command:
 
    .. prompt:: bash [monitor.1]#
 
      ceph orch apply -i /path/to/osd_spec.yml
 
-   This instruction will be issued to all the matching hosts, and will deploy
-   these OSDs.
+   This instruction is issued to all the matching hosts, and will deploy these
+   OSDs.
 
 Setups more complex than the one specified by the ``all`` filter are possible.
 See :ref:`osd_filters` for details.
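The contents of the "most basic example" ``osd_spec.yml`` fall outside the changed lines, so they do not appear in this diff. As a hedged sketch only, a spec matching the behavior the final hunk describes (turn every available device on every matching host into an OSD) would look roughly like the following; the ``service_id`` value ``default_drive_group`` is an assumption chosen to line up with the ``set-spec-affinity`` example in the first hunk:

```yaml
# Hypothetical minimal OSD service spec; values are illustrative, not from this diff.
service_type: osd
service_id: default_drive_group   # assumed id; any non-empty service_id works
placement:
  host_pattern: '*'               # glob matched against hosts from `ceph orch host ls`
spec:
  data_devices:
    all: true                     # ceph-volume decides which devices are 'available'
```

Such a file would be applied with ``ceph orch apply -i /path/to/osd_spec.yml``, as shown in the last hunk above.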