author     Zac Dover <zac.dover@proton.me>  2024-06-09 20:55:13 +0200
committer  Zac Dover <zac.dover@proton.me>  2024-06-09 21:55:48 +0200
commit     74cc624d002e51769da37c04b3bdc32e0077d370 (patch)
tree       69b5df40f06c1f07f7a238f634646a565b12cac7 /doc/start
parent     Merge pull request #57908 from xxhdx1985126/wip-66374 (diff)
doc/start: remove "intro.rst"
Remove "start/intro.rst", which has been renamed "start/index.rst" in order to follow the conventions followed elsewhere in the documentation. Follows https://github.com/ceph/ceph/pull/57900. Signed-off-by: Zac Dover <zac.dover@proton.me>
Diffstat (limited to 'doc/start')
-rw-r--r--  doc/start/index.rst |  4 -
-rw-r--r--  doc/start/intro.rst | 98 -
2 files changed, 0 insertions(+), 102 deletions(-)
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 640fb5d84a8..0aec895ab73 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -97,7 +97,3 @@ recover dynamically.
get-involved
documenting-ceph
-.. toctree::
- :maxdepth: 2
-
- intro
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
deleted file mode 100644
index 1cbead4a3df..00000000000
--- a/doc/start/intro.rst
+++ /dev/null
@@ -1,98 +0,0 @@
-===============
- Intro to Ceph
-===============
-
-Ceph can be used to provide :term:`Ceph Object Storage` to :term:`Cloud
-Platforms` and Ceph can be used to provide :term:`Ceph Block Device` services
-to :term:`Cloud Platforms`. Ceph can be used to deploy a :term:`Ceph File
-System`. All :term:`Ceph Storage Cluster` deployments begin with setting up
-each :term:`Ceph Node` and then setting up the network.
-
-A Ceph Storage Cluster requires the following: at least one Ceph Monitor and at
-least one Ceph Manager, and at least as many :term:`Ceph Object Storage
-Daemon<Ceph OSD>`\s (OSDs) as there are copies of a given object stored in the
-Ceph cluster (for example, if three copies of a given object are stored in the
-Ceph cluster, then at least three OSDs must exist in that Ceph cluster).
-
-The Ceph Metadata Server is necessary to run Ceph File System clients.
-
-.. note::
-
- It is a best practice to have a Ceph Manager for each Monitor, but it is not
- necessary.
-
-.. ditaa::
-
- +---------------+ +------------+ +------------+ +---------------+
- | OSDs | | Monitors | | Managers | | MDSs |
- +---------------+ +------------+ +------------+ +---------------+
-
-- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
- cluster state, including the :ref:`monitor map<display-mon-map>`, manager
- map, the OSD map, the MDS map, and the CRUSH map. These maps are critical
- cluster state required for Ceph daemons to coordinate with each other.
- Monitors are also responsible for managing authentication between daemons and
- clients. At least three monitors are normally required for redundancy and
- high availability.
-
-- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
- responsible for keeping track of runtime metrics and the current
- state of the Ceph cluster, including storage utilization, current
- performance metrics, and system load. The Ceph Manager daemons also
- host python-based modules to manage and expose Ceph cluster
- information, including a web-based :ref:`mgr-dashboard` and
- `REST API`_. At least two managers are normally required for high
- availability.
-
-- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
- ``ceph-osd``) stores data, handles data replication, recovery,
- rebalancing, and provides some monitoring information to Ceph
- Monitors and Managers by checking other Ceph OSD Daemons for a
- heartbeat. At least three Ceph OSDs are normally required for
- redundancy and high availability.
-
-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
- for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
- run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
- the Ceph Storage Cluster.
-
-Ceph stores data as objects within logical storage pools. Using the
-:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
-contain the object, and which OSD should store the placement group. The
-CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
-recover dynamically.
-
-.. _REST API: ../../mgr/restful
-
-.. container:: columns-2
-
- .. container:: column
-
- .. raw:: html
-
- <h3>Recommendations</h3>
-
- To begin using Ceph in production, you should review our hardware
- recommendations and operating system recommendations.
-
- .. toctree::
- :maxdepth: 2
-
- Beginner's Guide <beginners-guide>
- Hardware Recommendations <hardware-recommendations>
- OS Recommendations <os-recommendations>
-
- .. container:: column
-
- .. raw:: html
-
- <h3>Get Involved</h3>
-
- You can avail yourself of help or contribute documentation, source
- code or bugs by getting involved in the Ceph community.
-
- .. toctree::
- :maxdepth: 2
-
- get-involved
- documenting-ceph
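The closing paragraph of the removed intro describes the object-storage path: clients write objects into logical pools, and CRUSH computes which placement group (PG) and which OSDs hold each object. A minimal sketch of that path through the ``python3-rados`` bindings might look like the following; the pool name ``mypool``, the object name, and the configuration-file path are illustrative assumptions, not values taken from this commit or the removed file.

.. code-block:: python

   # Minimal sketch, assuming an existing pool "mypool" and the default
   # conffile path: write one object into a pool and read it back via librados.
   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # monitor addresses, keyring
   cluster.connect()                                       # authenticate with the cluster

   try:
       ioctx = cluster.open_ioctx('mypool')                # I/O context bound to one pool
       try:
           ioctx.write_full('hello-object', b'hello ceph') # store the object in the pool
           print(ioctx.read('hello-object'))               # read it back
       finally:
           ioctx.close()
   finally:
       cluster.shutdown()

Note that the client never names an OSD directly: CRUSH maps ``hello-object`` to a placement group and that placement group to OSDs, which is what lets the cluster rebalance and recover dynamically without any change on the client side.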