author    Kefu Chai <kchai@redhat.com>  2020-04-09 15:25:39 +0200
committer Kefu Chai <kchai@redhat.com>  2020-04-10 02:38:06 +0200
commit    0cb56e0f13dc57167271ec7f20f11421416196a2 (patch)
tree      951ab4237e3d89268f6d74c125f28cc6d5928061 /doc/rados
parent    doc/conf.py: exclude pybindings docs from build for RTD (diff)
doc: use plantweb as fallback for sphinx-ditaa
RTD does not support installing system packages; the only ways to install dependencies are setuptools and pip, while ditaa is a tool written in Java. So we need a native Python tool that allows us to render ditaa images. plantweb is able to use a web service for rendering ditaa diagrams, so let's use it as a fallback when "ditaa" is not around.

Also start a new line after the directive; otherwise the plantweb server returns a 500 error when it sees the diagram.

Signed-off-by: Kefu Chai <kchai@redhat.com>
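For reference, a minimal sketch of how such a fallback can be selected in doc/conf.py (the actual wiring landed in the parent commit; the extension module names below are assumptions for illustration, not necessarily what the Ceph tree uses):

    # Sketch only: prefer the local Java-based ditaa renderer when the
    # "ditaa" binary is installed, and fall back to plantweb, which sends
    # the diagram source to a rendering web service (pure Python, so it
    # is pip-installable on RTD).
    import shutil

    extensions = []
    if shutil.which('ditaa'):
        extensions.append('sphinxcontrib.ditaa')  # assumed module name
    else:
        extensions.append('plantweb.directive')   # plantweb's Sphinx directives

With something like this in place, local builds keep using the Java renderer while RTD builds transparently switch to the web service.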
Diffstat (limited to 'doc/rados')
-rw-r--r--  doc/rados/api/librados-intro.rst                 10
-rw-r--r--  doc/rados/configuration/mon-config-ref.rst        7
-rw-r--r--  doc/rados/configuration/mon-osd-interaction.rst  15
-rw-r--r--  doc/rados/configuration/network-config-ref.rst    2
-rw-r--r--  doc/rados/operations/cache-tiering.rst            2
-rw-r--r--  doc/rados/operations/monitoring-osd-pg.rst       12
-rw-r--r--  doc/rados/operations/placement-groups.rst         1
-rw-r--r--  doc/rados/operations/user-management.rst          3
8 files changed, 32 insertions, 20 deletions
diff --git a/doc/rados/api/librados-intro.rst b/doc/rados/api/librados-intro.rst
index c63a255897c..7179438a84d 100644
--- a/doc/rados/api/librados-intro.rst
+++ b/doc/rados/api/librados-intro.rst
@@ -15,7 +15,7 @@ the Ceph Storage Cluster:
- The :term:`Ceph Monitor`, which maintains a master copy of the cluster map.
- The :term:`Ceph OSD Daemon` (OSD), which stores data as objects on a storage node.
-.. ditaa::
+.. ditaa::
+---------------------------------+
| Ceph Storage Cluster Protocol |
| (librados) |
@@ -165,7 +165,7 @@ placement group and `OSD`_ for locating the data. Then the client application
can read or write data. The client app doesn't need to learn about the topology
of the cluster directly.
-.. ditaa::
+.. ditaa::
+--------+ Retrieves +---------------+
| Client |------------>| Cluster Map |
+--------+ +---------------+
@@ -217,7 +217,8 @@ these capabilities. The following diagram provides a high-level flow for the
initial connection.
-.. ditaa:: +---------+ +---------+
+.. ditaa::
+ +---------+ +---------+
| Client | | Monitor |
+---------+ +---------+
| |
@@ -521,7 +522,8 @@ functionality includes:
- Snapshot pools, list snapshots, etc.
-.. ditaa:: +---------+ +---------+ +---------+
+.. ditaa::
+ +---------+ +---------+ +---------+
| Client | | Monitor | | OSD |
+---------+ +---------+ +---------+
| | |
diff --git a/doc/rados/configuration/mon-config-ref.rst b/doc/rados/configuration/mon-config-ref.rst
index dbfc20b9084..e93cd28b7d7 100644
--- a/doc/rados/configuration/mon-config-ref.rst
+++ b/doc/rados/configuration/mon-config-ref.rst
@@ -34,8 +34,7 @@ Monitors can query the most recent version of the cluster map during sync
operations. Ceph Monitors leverage the key/value store's snapshots and iterators
(using leveldb) to perform store-wide synchronization.
-.. ditaa::
-
+.. ditaa::
/-------------\ /-------------\
| Monitor | Write Changes | Paxos |
| cCCC +-------------->+ cCCC |
@@ -505,7 +504,6 @@ Ceph Clients to read and write data. So the Ceph Storage Cluster's operating
capacity is 95TB, not 99TB.
.. ditaa::
-
+--------+ +--------+ +--------+ +--------+ +--------+ +--------+
| Rack 1 | | Rack 2 | | Rack 3 | | Rack 4 | | Rack 5 | | Rack 6 |
| cCCC | | cF00 | | cCCC | | cCCC | | cCCC | | cCCC |
@@ -636,7 +634,8 @@ fallen behind the other monitors. The requester asks the leader to synchronize,
and the leader tells the requester to synchronize with a provider.
-.. ditaa:: +-----------+ +---------+ +----------+
+.. ditaa::
+ +-----------+ +---------+ +----------+
| Requester | | Leader | | Provider |
+-----------+ +---------+ +----------+
| | |
diff --git a/doc/rados/configuration/mon-osd-interaction.rst b/doc/rados/configuration/mon-osd-interaction.rst
index a7324ebb0e5..6ef66265553 100644
--- a/doc/rados/configuration/mon-osd-interaction.rst
+++ b/doc/rados/configuration/mon-osd-interaction.rst
@@ -34,7 +34,8 @@ and ``[osd]`` or ``[global]`` section of your Ceph configuration file,
or by setting the value at runtime.
-.. ditaa:: +---------+ +---------+
+.. ditaa::
+ +---------+ +---------+
| OSD 1 | | OSD 2 |
+---------+ +---------+
| |
@@ -89,7 +90,9 @@ and ``mon osd reporter subtree level`` settings under the ``[mon]`` section of
your Ceph configuration file, or by setting the value at runtime.
-.. ditaa:: +---------+ +---------+ +---------+
+.. ditaa::
+
+ +---------+ +---------+ +---------+
| OSD 1 | | OSD 2 | | Monitor |
+---------+ +---------+ +---------+
| | |
@@ -118,7 +121,9 @@ Ceph Monitor heartbeat interval by adding an ``osd mon heartbeat interval``
setting under the ``[osd]`` section of your Ceph configuration file, or by
setting the value at runtime.
-.. ditaa:: +---------+ +---------+ +-------+ +---------+
+.. ditaa::
+
+ +---------+ +---------+ +-------+ +---------+
| OSD 1 | | OSD 2 | | OSD 3 | | Monitor |
+---------+ +---------+ +-------+ +---------+
| | | |
@@ -161,7 +166,9 @@ interval max`` setting under the ``[osd]`` section of your Ceph configuration
file, or by setting the value at runtime.
-.. ditaa:: +---------+ +---------+
+.. ditaa::
+
+ +---------+ +---------+
| OSD 1 | | Monitor |
+---------+ +---------+
| |
diff --git a/doc/rados/configuration/network-config-ref.rst b/doc/rados/configuration/network-config-ref.rst
index 41da0a17528..bd49a87b310 100644
--- a/doc/rados/configuration/network-config-ref.rst
+++ b/doc/rados/configuration/network-config-ref.rst
@@ -112,7 +112,7 @@ Each Ceph OSD Daemon on a Ceph Node may use up to four ports:
#. One for sending data to other OSDs.
#. Two for heartbeating on each interface.
-.. ditaa::
+.. ditaa::
/---------------\
| OSD |
| +---+----------------+-----------+
diff --git a/doc/rados/operations/cache-tiering.rst b/doc/rados/operations/cache-tiering.rst
index c825c22c303..237b6e3c9f3 100644
--- a/doc/rados/operations/cache-tiering.rst
+++ b/doc/rados/operations/cache-tiering.rst
@@ -13,7 +13,7 @@ tier. So the cache tier and the backing storage tier are completely transparent
to Ceph clients.
-.. ditaa::
+.. ditaa::
+-------------+
| Ceph Client |
+------+------+
diff --git a/doc/rados/operations/monitoring-osd-pg.rst b/doc/rados/operations/monitoring-osd-pg.rst
index 630d268b458..08b70dd4d51 100644
--- a/doc/rados/operations/monitoring-osd-pg.rst
+++ b/doc/rados/operations/monitoring-osd-pg.rst
@@ -33,7 +33,9 @@ not assign placement groups to the OSD. If an OSD is ``down``, it should also be
.. note:: If an OSD is ``down`` and ``in``, there is a problem and the cluster
will not be in a healthy state.
-.. ditaa:: +----------------+ +----------------+
+.. ditaa::
+
+ +----------------+ +----------------+
| | | |
| OSD #n In | | OSD #n Up |
| | | |
@@ -158,7 +160,9 @@ OSDs to establish agreement on the current state of the placement group
(assuming a pool with 3 replicas of the PG).
-.. ditaa:: +---------+ +---------+ +-------+
+.. ditaa::
+
+ +---------+ +---------+ +-------+
| OSD 1 | | OSD 2 | | OSD 3 |
+---------+ +---------+ +-------+
| | |
@@ -265,8 +269,8 @@ group's Acting Set will peer. Once peering is complete, the placement group
status should be ``active+clean``, which means a Ceph client can begin writing
to the placement group.
-.. ditaa::
-
+.. ditaa::
+
/-----------\ /-----------\ /-----------\
| Creating |------>| Peering |------>| Active |
\-----------/ \-----------/ \-----------/
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index dbeb2fe30c7..f7f2d110a8e 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -244,7 +244,6 @@ OSDs. For instance, in a replicated pool of size two, each placement
group will store objects on two OSDs, as shown below.
.. ditaa::
-
+-----------------------+ +-----------------------+
| Placement Group #1 | | Placement Group #2 |
| | | |
diff --git a/doc/rados/operations/user-management.rst b/doc/rados/operations/user-management.rst
index 4d961d4be04..7b7713a83bd 100644
--- a/doc/rados/operations/user-management.rst
+++ b/doc/rados/operations/user-management.rst
@@ -9,7 +9,8 @@ authorization with the :term:`Ceph Storage Cluster`. Users are either
individuals or system actors such as applications, which use Ceph clients to
interact with the Ceph Storage Cluster daemons.
-.. ditaa:: +-----+
+.. ditaa::
+ +-----+
| {o} |
| |
+--+--+ /---------\ /---------\