path: root/doc/architecture.rst
author    Sage Weil <sage@newdream.net>  2011-09-22 01:28:43 +0200
committer Sage Weil <sage@newdream.net>  2011-09-23 00:08:25 +0200
commit 6f8f140350c91f4d823f7c7cc856d78abb57b22c (patch)
tree   6349d318688a41254a633bd233a97ff5f8a693ca /doc/architecture.rst
parent mount.ceph: use ":/" instead of just ":" as ip/path delimiter (diff)
c* -> ceph-*
Hopefully I didn't miss too much... Signed-off-by: Sage Weil <sage@newdream.net>
Diffstat (limited to 'doc/architecture.rst')
-rw-r--r--  doc/architecture.rst  42
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/doc/architecture.rst b/doc/architecture.rst
index c9705b4bdd8..e8f8b6d874d 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -21,30 +21,30 @@ overhead.
Monitor cluster
===============
-``cmon`` is a lightweight daemon that provides a consensus for
+``ceph-mon`` is a lightweight daemon that provides a consensus for
distributed decision-making in a Ceph/RADOS cluster.
It is also the initial point of contact for new clients, and will hand
out information about the topology of the cluster, such as the
``osdmap``.
-You normally run 3 ``cmon`` daemons, on 3 separate physical machines,
+You normally run 3 ``ceph-mon`` daemons, on 3 separate physical machines,
isolated from each other; for example, in different racks or rows.
You could run just 1 instance, but that means giving up on high
availability.
-You may use the same hosts for ``cmon`` and other purposes.
+You may use the same hosts for ``ceph-mon`` and other purposes.
-``cmon`` processes talk to each other using a Paxos_\-style
+``ceph-mon`` processes talk to each other using a Paxos_\-style
protocol. They discover each other via the ``[mon.X] mon addr`` fields
in ``ceph.conf``.
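As a minimal sketch of the ``[mon.X] mon addr`` fields mentioned above, a three-monitor ``ceph.conf`` might contain sections like the following (the hostnames and IP addresses here are hypothetical, not taken from this document):

```ini
[mon.a]
        host = mona                 ; hypothetical hostname
        mon addr = 10.0.0.1:6789
[mon.b]
        host = monb
        mon addr = 10.0.0.2:6789
[mon.c]
        host = monc
        mon addr = 10.0.0.3:6789
```

Three sections correspond to the recommended three ``ceph-mon`` daemons on separate machines.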
.. todo:: What about ``monmap``? Fact check.
-Any decision requires the majority of the ``cmon`` processes to be
+Any decision requires the majority of the ``ceph-mon`` processes to be
healthy and communicating with each other. For this reason, you never
-want an even number of ``cmon``\s; there is no unambiguous majority
+want an even number of ``ceph-mon``\s; there is no unambiguous majority
subgroup for an even number.
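The majority rule above is plain arithmetic, and a short sketch (not Ceph code) shows why an even monitor count buys nothing: a cluster of ``n`` monitors keeps working only while more than half of them are healthy.

```python
def quorum_size(n_monitors: int) -> int:
    """Smallest number of healthy monitors that forms a strict majority."""
    return n_monitors // 2 + 1

def tolerated_failures(n_monitors: int) -> int:
    """How many monitors may fail while a majority still survives."""
    return n_monitors - quorum_size(n_monitors)

# 3 monitors tolerate 1 failure; 4 monitors also tolerate only 1,
# so the fourth (even) monitor adds no fault tolerance.
```

This is why an odd number of ``ceph-mon`` daemons is always preferred.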
.. _Paxos: http://en.wikipedia.org/wiki/Paxos_algorithm
@@ -58,9 +58,9 @@ subgroup for an even number.
RADOS
=====
-``cosd`` is the storage daemon that provides the RADOS service. It
-uses ``cmon`` for cluster membership, services object read/write/etc
-request from clients, and peers with other ``cosd``\s for data
+``ceph-osd`` is the storage daemon that provides the RADOS service. It
+uses ``ceph-mon`` for cluster membership, services object read/write/etc.
+requests from clients, and peers with other ``ceph-osd``\s for data
replication.
The data model is fairly simple on this level. There are multiple
@@ -77,7 +77,7 @@ metadata to store file owner etc.
.. todo:: Verify that metadata is unordered.
-Underneath, ``cosd`` stores the data on a local filesystem. We
+Underneath, ``ceph-osd`` stores the data on a local filesystem. We
recommend using Btrfs_, but any POSIX filesystem that has extended
attributes should work (see :ref:`xattr`).
@@ -96,37 +96,37 @@ Ceph filesystem
===============
The Ceph filesystem service is provided by a daemon called
-``cmds``. It uses RADOS to store all the filesystem metadata
+``ceph-mds``. It uses RADOS to store all the filesystem metadata
(directories, file ownership, access modes, etc), and directs clients
to access RADOS directly for the file contents.
The Ceph filesystem aims for POSIX compatibility, except for a few
chosen differences. See :doc:`/appendix/differences-from-posix`.
-``cmds`` can run as a single process, or it can be distributed out to
+``ceph-mds`` can run as a single process, or it can be distributed out to
multiple physical machines, either for high availability or for
scalability.
-For high availability, the extra ``cmds`` instances can be `standby`,
-ready to take over the duties of any failed ``cmds`` that was
+For high availability, the extra ``ceph-mds`` instances can be `standby`,
+ready to take over the duties of any failed ``ceph-mds`` that was
`active`. This is easy because all the data, including the journal, is
stored on RADOS. The transition is triggered automatically by
-``cmon``.
+``ceph-mon``.
-For scalability, multiple ``cmds`` instances can be `active`, and they
+For scalability, multiple ``ceph-mds`` instances can be `active`, and they
will split the directory tree into subtrees (and shards of a single
busy directory), effectively balancing the load amongst all `active`
servers.
Combinations of `standby` and `active` etc are possible, for example
-running 3 `active` ``cmds`` instances for scaling, and one `standby`.
+running 3 `active` ``ceph-mds`` instances for scaling, and one `standby`.
-To control the number of `active` ``cmds``\es, see
+To control the number of `active` ``ceph-mds``\es, see
:doc:`/ops/manage/grow/mds`.
.. topic:: Status as of 2011-09:
- Multiple `active` ``cmds`` operation is stable under normal
+ Multiple `active` ``ceph-mds`` operation is stable under normal
circumstances, but some failure scenarios may still cause
operational issues.
@@ -166,14 +166,14 @@ virtualization. This is done with the command-line tool ``rbd`` (see
The latter is also useful in non-virtualized scenarios.
Internally, RBD stripes the device image over multiple RADOS objects,
-each typically located on a separate ``cosd``, allowing it to perform
+each typically located on a separate ``ceph-osd``, allowing it to perform
better than a single server could.
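The striping idea can be sketched as a simple offset-to-object mapping (a simplification, not the actual RBD layout code; the 4 MB object size is the customary default, and the naming scheme here is hypothetical):

```python
OBJECT_SIZE = 4 * 1024 * 1024  # assumed default RADOS object size (4 MB)

def rbd_object_for_offset(image_name: str, byte_offset: int) -> str:
    """Map a byte offset within the device image to the name of the
    RADOS object holding it. Consecutive 4 MB chunks land on different
    objects, and hence typically on different ceph-osd daemons."""
    index = byte_offset // OBJECT_SIZE
    return "%s.%012x" % (image_name, index)
```

Because neighbouring chunks map to distinct objects, sequential I/O against one image fans out across many OSDs in parallel.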
Client
======
-.. todo:: cephfs, cfuse, librados, libceph, librbd
+.. todo:: cephfs, ceph-fuse, librados, libceph, librbd
.. todo:: Summarize how much Ceph trusts the client, for what parts (security vs reliability).