author     Sage Weil <sage.weil@dreamhost.com>  2012-03-26 20:30:20 +0200
committer  Sage Weil <sage.weil@dreamhost.com>  2012-03-26 20:30:20 +0200
commit     3bd1f18e594c528df3b6b2060c545e647ee96d21 (patch)
tree       eff762a96d79413bf1a11b2f0b92c6faedd799ed /doc
parent     doc/dev/peering.rst: fix typo (diff)
download   ceph-3bd1f18e594c528df3b6b2060c545e647ee96d21.tar.xz
           ceph-3bd1f18e594c528df3b6b2060c545e647ee96d21.zip
doc: few notes on manipulating the crush map
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
Diffstat (limited to 'doc')
-rw-r--r--  doc/ops/manage/crush.rst     | 75
-rw-r--r--  doc/ops/manage/grow/osd.rst  | 35
2 files changed, 108 insertions(+), 2 deletions(-)
diff --git a/doc/ops/manage/crush.rst b/doc/ops/manage/crush.rst
new file mode 100644
index 00000000000..eb5b16b442b
--- /dev/null
+++ b/doc/ops/manage/crush.rst
@@ -0,0 +1,75 @@
+=========================
+ Adjusting the CRUSH map
+=========================
+
+.. _adjusting-crush:
+
+There are a few ways to adjust the CRUSH map:
+
+* online, by issuing commands to the monitor
+* offline, by extracting the current map to a file, modifying it, and then reinjecting a new map
+
+Some offline changes can be made directly with ``crushtool``; others
+require you to decompile the map to its text form, edit it manually,
+and then recompile it.
+
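+The offline round trip typically looks something like the following
+sketch (the ``/tmp`` paths are only illustrative)::
+
+ $ ceph osd getcrushmap -o /tmp/crush             # extract the current (binary) map
+ $ crushtool -d /tmp/crush -o /tmp/crush.txt      # decompile to editable text
+ $ vi /tmp/crush.txt                              # make your changes
+ $ crushtool -c /tmp/crush.txt -o /tmp/crush.new  # recompile
+ $ ceph osd setcrushmap -i /tmp/crush.new         # inject the new map
+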
+
+Adding a new device (OSD) to the map
+====================================
+
+Adding new devices can be done via the monitor. The general form is::
+
+ $ ceph osd crush add <id> <name> <weight> [<loc> [<lo2> ...]]
+
+where
+
+ * ``id`` is the numeric device id (the OSD id)
+ * ``name`` is an alphanumeric name. By convention Ceph uses
+ ``osd.$id``.
+ * ``weight`` is a floating point value controlling how much data
+   will be allocated to the device.  A decent convention is to make
+   this the number of TB the device will store.
+ * ``loc`` is a list of ``what=where`` pairs indicating where in the
+ CRUSH hierarchy the device will be stored. By default, the
+ hierarchy (the ``what``s) includes ``pool`` (the ``default`` pool
+ is normally the root of the hierarchy), ``rack``, and ``host``.
+ At least one of these location specifiers has to refer to an
+ existing point in the hierarchy, and only the lowest (most
+ specific) match counts. Beneath that point, any intervening
+ branches will be created as needed. Specifying the complete
+ location is always sufficient, and also safe in that existing
+ branches (and devices) won't be moved around.
+
+For example, if the new OSD id is ``123``, the desired weight is ``1.0``,
+and the new device is on host ``hostfoo`` in rack ``rackbar``, then::
+
+ $ ceph osd crush add 123 osd.123 1.0 pool=default rack=rackbar host=hostfoo
+
+will add it to the hierarchy. The rack ``rackbar`` and host
+``hostfoo`` will be added as needed, as long as the pool ``default``
+exists (as it does in the default Ceph CRUSH map generated during
+cluster creation).
+
+Note that if you later add another device on the same host but specify
+a different pool or rack::
+
+ $ ceph osd crush add 124 osd.124 1.0 pool=nondefault rack=weirdrack host=hostfoo
+
+the device will still be placed in host ``hostfoo`` at its current
+location (rack ``rackbar`` and pool ``default``).
+
+
+Adjusting the CRUSH weight
+==========================
+
+You can adjust the CRUSH weight for a device with::
+
+ $ ceph osd crush reweight osd.123 2.0
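+
+A weight change will cause data to migrate toward or away from the
+device.  If you want to confirm the new value afterwards, one quick
+(purely optional) check is to look at the hierarchy and weights with::
+
+ $ ceph osd tree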
+
+Removing a device
+=================
+
+You can remove a device from the CRUSH map with::
+
+ $ ceph osd crush remove osd.123
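+
+Note that this only removes the device from the CRUSH hierarchy; the
+OSD id itself still exists in the OSD map.  A complete removal also
+uses ``ceph osd rm``, as in the OSD removal steps in the cluster
+resizing documentation; roughly::
+
+ $ ceph osd crush remove osd.123
+ $ ceph osd rm 123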
+
diff --git a/doc/ops/manage/grow/osd.rst b/doc/ops/manage/grow/osd.rst
index 99312a3a229..d64f789fa80 100644
--- a/doc/ops/manage/grow/osd.rst
+++ b/doc/ops/manage/grow/osd.rst
@@ -2,11 +2,42 @@
Resizing the RADOS cluster
============================
-Adding new OSDs
-===============
+Adding a new OSD to the cluster
+===============================
+
+Briefly...
+
+#. Allocate a new OSD id::
+
+ $ ceph osd create
+ 123
+
+#. Make sure ``ceph.conf`` is valid for the new OSD (a sketch of what
+   the new OSD's section might look like follows this list).
+
+#. Initialize the OSD data directory::
+
+ $ ceph-osd -i 123 --mkfs --mkkey
+
+#. Register the OSD authentication key::
+
+ $ ceph auth add osd.123 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd-data/123/keyring
+
+#. Adjust the CRUSH map to allocate data to the new device (see :ref:`adjusting-crush`).
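+
+For the ``ceph.conf`` step above, the exact contents depend on your
+setup, but a minimal, purely illustrative section for the new OSD
+might look like this (the host name and data path are assumptions
+matching the examples elsewhere in this documentation)::
+
+ [osd.123]
+         host = hostfoo
+         osd data = /var/lib/ceph/osd-data/$id
+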
Removing OSDs
=============
+Briefly...
+
+#. Stop the daemon.
+
+#. Remove it from the CRUSH map::
+
+ $ ceph osd crush remove osd.123
+
+#. Remove it from the OSD map::
+
+ $ ceph osd rm 123
+
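+Optionally, you can verify that the id is no longer present by printing
+the OSD map, which should no longer include an entry for the removed
+OSD::
+
+ $ ceph osd dump
+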
See also :ref:`failures-osd`.