author    | Sage Weil <sage@newdream.net> | 2012-03-07 00:39:28 +0100
committer | Sage Weil <sage@newdream.net> | 2012-03-07 02:05:29 +0100
commit    | 75ad8979e7d8539f3b4d7446c0dc18d9183c301d (patch)
tree      | de23d6f0fd2b4b5015b640458e473b6e38f32f87 /doc
parent    | mon: list nearfull/full osd detail (diff)
doc: diagnose full osd cluster
Signed-off-by: Sage Weil <sage@newdream.net>
Diffstat (limited to 'doc')
-rw-r--r-- | doc/ops/manage/failures/osd.rst | 24
1 file changed, 24 insertions, 0 deletions
diff --git a/doc/ops/manage/failures/osd.rst b/doc/ops/manage/failures/osd.rst
index ddf32392fd2..9e2219af87a 100644
--- a/doc/ops/manage/failures/osd.rst
+++ b/doc/ops/manage/failures/osd.rst
@@ -26,6 +26,30 @@
 restarting, an error message should be present in its log file in
 ``/var/log/ceph``.
 
+Full cluster
+============
+
+If the cluster fills up, the monitor will prevent new data from being
+written. The system puts ceph-osds in two categories: ``nearfull``
+and ``full``, with configurable thresholds for each (80% and 90% by
+default). In both cases, the affected ceph-osds are reported by ``ceph health``::
+
+  $ ceph health
+  HEALTH_WARN 1 nearfull osds
+  osd.2 is near full at 85%
+
+or::
+
+  $ ceph health
+  HEALTH_ERR 1 nearfull osds, 1 full osds
+  osd.2 is near full at 85%
+  osd.3 is full at 97%
+
+The best way to deal with a full cluster is to add new ceph-osds,
+allowing the cluster to redistribute data to the newly available
+storage.
+
+
 Homeless placement groups (PGs)
 ===============================
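
The thresholds the added text describes as configurable are set in ceph.conf on the monitors. A minimal sketch, assuming the ``mon osd nearfull ratio`` and ``mon osd full ratio`` options and the 80%/90% defaults the patch cites::

  [mon]
          ; hypothetical values matching the defaults cited above:
          ; warn (HEALTH_WARN) once any ceph-osd passes 80% utilization
          mon osd nearfull ratio = .80
          ; block new writes (HEALTH_ERR) once any ceph-osd passes 90%
          mon osd full ratio = .90

Once new ceph-osds have been added and the cluster has redistributed data onto them, ``ceph health`` should return to ``HEALTH_OK``.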