Diffstat (limited to 'doc')
-rw-r--r--  doc/_ext/ceph_commands.py                 2
-rw-r--r--  doc/conf.py                              18
-rw-r--r--  doc/dev/crimson/crimson.rst             173
-rw-r--r--  doc/rados/operations/health-checks.rst    8
4 files changed, 114 insertions, 87 deletions
diff --git a/doc/_ext/ceph_commands.py b/doc/_ext/ceph_commands.py
index 0697c71f0e1..d96eab08853 100644
--- a/doc/_ext/ceph_commands.py
+++ b/doc/_ext/ceph_commands.py
@@ -94,7 +94,7 @@ class CmdParam(object):
         self.goodchars = goodchars
         self.positional = positional != 'false'

-        assert who == None
+        assert who is None

     def help(self):
         advanced = []
diff --git a/doc/conf.py b/doc/conf.py
index 4fdc9a53b75..5293ff1b212 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -76,7 +76,7 @@ html_show_sphinx = False
 html_static_path = ["_static"]
 html_sidebars = {
     '**': ['smarttoc.html', 'searchbox.html']
-    }
+}
 html_css_files = ['css/custom.css']
@@ -133,13 +133,23 @@ extensions = [
     'sphinxcontrib.mermaid',
     'sphinxcontrib.openapi',
     'sphinxcontrib.seqdiag',
-    ]
+]
 ditaa = shutil.which("ditaa")
 if ditaa is not None:
     # in case we don't have binfmt_misc enabled or jar is not registered
-    ditaa_args = ['-jar', ditaa]
-    ditaa = 'java'
+    _jar_paths = [
+        '/usr/share/ditaa/lib/ditaa.jar',   # Gentoo
+        '/usr/share/ditaa/ditaa.jar',       # deb
+        '/usr/share/java/ditaa.jar',        # rpm
+    ]
+    _jar_paths = [p for p in _jar_paths if os.path.exists(p)]
+    if _jar_paths:
+        # a packaged jar was found: run it through java explicitly
+        ditaa = 'java'
+        ditaa_args = ['-jar', _jar_paths[0]]
+    else:
+        # no jar found; keep the ditaa binary from shutil.which()
+        ditaa_args = []
     extensions += ['sphinxcontrib.ditaa']
 else:
     extensions += ['plantweb.directive']
diff --git a/doc/dev/crimson/crimson.rst b/doc/dev/crimson/crimson.rst
index f9582ec6c84..f6d59a057ff 100644
--- a/doc/dev/crimson/crimson.rst
+++ b/doc/dev/crimson/crimson.rst
@@ -43,6 +43,82 @@ use a Crimson build:
You'll likely need to supply the ``--allow-mismatched-release`` flag to
use a non-release branch.
+Configure Crimson with BlueStore
+================================
+
+As BlueStore is not a Crimson-native `object store backend`_,
+deploying Crimson with BlueStore as the back end requires setting
+one of the following two configuration options:
+
+.. note::
+
+   #. These two options, along with ``crimson_alien_op_num_threads``,
+      can't be changed after deployment.
+   #. `vstart.sh`_ sets these options using the ``--crimson-smp`` flag.
+
+
+1) ``crimson_seastar_num_threads``
+
+   In order to allow easier cluster deployments, this option can be used
+   instead of setting the CPU mask manually for each OSD.
+
+   It's recommended that the **number of OSDs on each host** multiplied by
+   ``crimson_seastar_num_threads`` be less than the node's number of CPU
+   cores (``nproc``).
+
+   For example, for deploying two nodes, each with eight CPU cores and two OSDs:
+
+   .. code-block:: yaml
+
+      conf:
+        # Global to all OSDs
+        osd:
+          crimson seastar num threads: 3
+
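+   With this example, the two OSDs on a host spawn three Seastar threads
+   each, ``2 * 3 = 6`` in total, which stays below the eight available
+   cores.
+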
+   .. note::
+
+      For optimal performance, ``crimson_seastar_cpu_cores`` should be set instead.
+
+2) ``crimson_seastar_cpu_cores`` and ``crimson_alien_thread_cpu_cores``
+
+   Explicitly set the CPU core allocation for each ``crimson-osd``
+   and for the BlueStore back end. It's recommended that the two sets be
+   mutually exclusive.
+
+   For example, for deploying two nodes, each with eight CPU cores and two OSDs:
+
+   .. code-block:: yaml
+
+      conf:
+        # Both nodes
+        osd:
+          crimson alien thread cpu cores: 6-7
+
+        # First node
+        osd.0:
+          crimson seastar cpu cores: 0-2
+        osd.1:
+          crimson seastar cpu cores: 3-5
+
+        # Second node
+        osd.2:
+          crimson seastar cpu cores: 0-2
+        osd.3:
+          crimson seastar cpu cores: 3-5
+
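+   Note that the Seastar reactor cores (``0-2``, ``3-5``) and the alien
+   BlueStore cores (``6-7``) don't overlap, which satisfies the
+   mutual-exclusivity recommendation above.
+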
+   For a single node with eight CPU cores and three OSDs:
+
+   .. code-block:: yaml
+
+      conf:
+        osd:
+          crimson alien thread cpu cores: 6-7
+        osd.0:
+          crimson seastar cpu cores: 0-1
+        osd.1:
+          crimson seastar cpu cores: 2-3
+        osd.2:
+          crimson seastar cpu cores: 4-5
+
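+   Here each of the three OSDs gets two reactor cores (``3 * 2 = 6``),
+   leaving cores ``6-7`` for the alien BlueStore threads, so all eight
+   cores are used without overlap.
+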
Running Crimson
===============
@@ -106,7 +182,7 @@ The following options can be used with ``vstart.sh``.
(as determined by `nproc`) will be assigned to the object store.
``--bluestore``
- Use alienized BlueStore as the object store backend.
+  Use the alienized BlueStore as the object store backend. This is the
+  default; see the `object store backend`_ section below for more details.
``--cyanstore``
Use CyanStore as the object store backend.
@@ -115,7 +191,7 @@ The following options can be used with ``vstart.sh``.
Use the alienized MemStore as the object store backend.
``--seastore``
- Use SeaStore as the back end object store. This is the default (see below section on the `object store backend`_ for more details)
+ Use SeaStore as the back end object store.
``--seastore-devs``
Specify the block device used by SeaStore.
@@ -131,11 +207,20 @@ The following options can be used with ``vstart.sh``.
  Valid types include ``HDD``, ``SSD`` (default), ``ZNS``, and ``RANDOM_BLOCK_SSD``.
  Note that secondary devices should not be faster than the main device.
+To start a cluster with a single Crimson node, run::
+
+   $ MGR=1 MON=1 OSD=1 MDS=0 RGW=0 ../src/vstart.sh \
+       --without-dashboard --bluestore --crimson \
+       --redirect-output
-To start a simple cluster with a single core Crimson OSD, run::
+Another example, using SeaStore with explicit devices::
-   $ MGR=1 MON=1 OSD=1 MDS=0 RGW=0 ../src/vstart.sh -n \
-     --without-dashboard --seastore --crimson
+   $ MGR=1 MON=1 OSD=1 MDS=0 RGW=0 ../src/vstart.sh -n -x \
+       --without-dashboard --seastore \
+       --crimson --redirect-output \
+       --seastore-devs /dev/sda \
+       --seastore-secondary-devs /dev/sdb \
+       --seastore-secondary-devs-type HDD
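+
+Here ``-n`` creates a new cluster and ``-x`` enables cephx authentication.
+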
Stop this ``vstart`` cluster by running::
@@ -154,7 +239,7 @@ They are:
.. describe:: seastore
-   Seastore is the default Crimson backend and is still under active development.
+   SeaStore is still under active development.
The alienized object store backends are backed by a thread pool, which
is a proxy of the alienstore adaptor running in Seastar. The proxy issues
@@ -169,82 +254,6 @@ managed by the Seastar framework. They are:
The object store used by the classic ``ceph-osd``
-Configure Crimson with Bluestore
-================================
-
-As Bluestore is not a Crimson native `object store backend`_,
-deploying Crimson with Bluestore as the back end requires setting
-one of the two following configuration options:
-
-.. note::
-
-   #. These two options, along with ``crimson_alien_op_num_threads``,
-      can't be changed after deployment.
-   #. `vstart.sh`_ sets these options using the ``--crimson-smp`` flag.
-
-
-1) ``crimson_seastar_num_threads``
-
-   In order to allow easier cluster deployments, this option can be used
-   instead of setting the CPU mask manually for each OSD.
-
-   It's recommended to set the **number of OSDs on each host** multiplied by
-   ``crimson_seastar_num_threads`` to be less than the node's number of CPU
-   cores (``nproc``).
-
-   For example, for deploying two nodes with eight CPU cores and two OSDs each:
-
-   .. code-block:: yaml
-
-      conf:
-        # Global to all OSDs
-        osd:
-          crimson seastar num threads: 3
-
-   .. note::
-
-      #. For optimal performance ``crimson_seastar_cpu_cores`` should be set instead.
-
-2) ``crimson_seastar_cpu_cores`` and ``crimson_alien_thread_cpu_cores``.
-
-   Explicitly set the CPU core allocation for each ``crimson-osd``
-   and for the BlueStore back end. It's recommended for each set to be mutually exclusive.
-
-   For example, for deploying two nodes with eight CPU cores and two OSDs each:
-
-   .. code-block:: yaml
-
-      conf:
-        # Both nodes
-        osd:
-          crimson alien thread cpu cores: 6-7
-
-        # First node
-        osd.0:
-          crimson seastar cpu cores: 0-2
-        osd.1:
-          crimson seastar cpu cores: 3-5
-
-        # Second node
-        osd.2:
-          crimson seastar cpu cores: 0-2
-        osd.3:
-          crimson seastar cpu cores: 3-5
-
-   For a single node with eight node and three OSDs:
-
-   .. code-block:: yaml
-
-      conf:
-        osd:
-          crimson alien thread cpu cores: 6-7
-        osd.0:
-          crimson seastar cpu cores: 0-1
-        osd.1:
-          crimson seastar cpu cores: 2-3
-        osd.2:
-          crimson seastar cpu cores: 4-5
-
daemonize
---------
diff --git a/doc/rados/operations/health-checks.rst b/doc/rados/operations/health-checks.rst
index f5d38948150..a1498a09fd0 100644
--- a/doc/rados/operations/health-checks.rst
+++ b/doc/rados/operations/health-checks.rst
@@ -1665,6 +1665,14 @@ Some of the gateways are in the GW_UNAVAILABLE state. If a NVMeoF daemon has
crashed, the daemon log file (found at ``/var/log/ceph/``) may contain
troubleshooting information.
+NVMEOF_GATEWAY_DELETING
+_______________________
+
+Some of the gateways are in the GW_DELETING state. They will stay in this
+state until all the namespaces under the gateway's load balancing group are
+moved to another load balancing group ID. This is done automatically by the
+load balancing process. If this alert persists for a long time, there might
+be an issue with that process.
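+
+As with other health checks, more detail on the affected gateways can be
+obtained with::
+
+   ceph health detail
+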
Miscellaneous
-------------