author    Zac Dover <zac.dover@gmail.com>    2022-11-29 13:28:36 +0100
committer Zac Dover <zac.dover@gmail.com>    2022-11-29 13:28:36 +0100
commit    f51fc235ce73eb8253c8e3713b58b82013caf934 (patch)
tree      762ddedf9b262a778efbcff754ed1ec820457ba7 /doc
parent    Merge pull request #49107 from zdover23/wip-doc-2022-11-29-rados-balancer-pro... (diff)
doc/rados: add prompts to cache-tiering.rst
Add unselectable prompts to doc/rados/operations/cache-tiering.rst.

https://tracker.ceph.com/issues/57108

Signed-off-by: Zac Dover <zac.dover@gmail.com>
Diffstat (limited to 'doc')
-rw-r--r-- doc/rados/operations/cache-tiering.rst | 234
1 file changed, 153 insertions, 81 deletions
diff --git a/doc/rados/operations/cache-tiering.rst b/doc/rados/operations/cache-tiering.rst
index a9e7984d45d..8056ace4743 100644
--- a/doc/rados/operations/cache-tiering.rst
+++ b/doc/rados/operations/cache-tiering.rst
@@ -205,62 +205,82 @@ Creating a Cache Tier
=====================
Setting up a cache tier involves associating a backing storage pool with
-a cache pool ::
+a cache pool:
- ceph osd tier add {storagepool} {cachepool}
+.. prompt:: bash $
-For example ::
+ ceph osd tier add {storagepool} {cachepool}
- ceph osd tier add cold-storage hot-storage
+For example:
-To set the cache mode, execute the following::
+.. prompt:: bash $
- ceph osd tier cache-mode {cachepool} {cache-mode}
+ ceph osd tier add cold-storage hot-storage
-For example::
+To set the cache mode, execute the following:
- ceph osd tier cache-mode hot-storage writeback
+.. prompt:: bash $
+
+ ceph osd tier cache-mode {cachepool} {cache-mode}
+
+For example:
+
+.. prompt:: bash $
+
+ ceph osd tier cache-mode hot-storage writeback
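+
+As a quick sanity check (a suggested verification step, not part of the setup
+itself), the active cache mode should appear in the pool details:
+
+.. prompt:: bash $
+
+   ceph osd pool ls detail | grep hot-storage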
The cache tiers overlay the backing storage tier, so they require one
additional step: you must direct all client traffic from the storage pool to
the cache pool. To direct client traffic directly to the cache pool, execute
-the following::
+the following:
+
+.. prompt:: bash $
- ceph osd tier set-overlay {storagepool} {cachepool}
+ ceph osd tier set-overlay {storagepool} {cachepool}
-For example::
+For example:
- ceph osd tier set-overlay cold-storage hot-storage
+.. prompt:: bash $
+
+ ceph osd tier set-overlay cold-storage hot-storage
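+
+Taken together, a minimal writeback-tier setup (assuming both example pools
+already exist) looks like this:
+
+.. prompt:: bash $
+
+   ceph osd tier add cold-storage hot-storage
+   ceph osd tier cache-mode hot-storage writeback
+   ceph osd tier set-overlay cold-storage hot-storage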
Configuring a Cache Tier
========================
Cache tiers have several configuration options. You may set
-cache tier configuration options with the following usage::
+cache tier configuration options with the following usage:
- ceph osd pool set {cachepool} {key} {value}
+.. prompt:: bash $
+ ceph osd pool set {cachepool} {key} {value}
+
See `Pools - Set Pool Values`_ for details.
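+
+To read a configuration value back after setting it, use ``ceph osd pool
+get``:
+
+.. prompt:: bash $
+
+   ceph osd pool get {cachepool} {key}
+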
Target Size and Type
--------------------
-Ceph's production cache tiers use a `Bloom Filter`_ for the ``hit_set_type``::
+Ceph's production cache tiers use a `Bloom Filter`_ for the ``hit_set_type``:
+
+.. prompt:: bash $
- ceph osd pool set {cachepool} hit_set_type bloom
+ ceph osd pool set {cachepool} hit_set_type bloom
-For example::
+For example:
- ceph osd pool set hot-storage hit_set_type bloom
+.. prompt:: bash $
+
+ ceph osd pool set hot-storage hit_set_type bloom
The ``hit_set_count`` and ``hit_set_period`` define how many such HitSets to
-store, and how much time each HitSet should cover. ::
+store, and how much time each HitSet should cover:
+
+.. prompt:: bash $
- ceph osd pool set {cachepool} hit_set_count 12
- ceph osd pool set {cachepool} hit_set_period 14400
- ceph osd pool set {cachepool} target_max_bytes 1000000000000
+ ceph osd pool set {cachepool} hit_set_count 12
+ ceph osd pool set {cachepool} hit_set_period 14400
+ ceph osd pool set {cachepool} target_max_bytes 1000000000000
.. note:: A larger ``hit_set_count`` results in more RAM consumed by
the ``ceph-osd`` process.
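+
+With the values shown above, each HitSet covers 14400 seconds (four hours) and
+twelve HitSets are retained, so the agent can consult roughly 48 hours of
+access history when making promotion decisions.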
@@ -279,10 +299,12 @@ number of archive HitSets are checked. The object is promoted if the object is
found in any of the most recent ``min_read_recency_for_promote`` HitSets.
A similar parameter can be set for the write operation, which is
-``min_write_recency_for_promote``. ::
+``min_write_recency_for_promote``:
- ceph osd pool set {cachepool} min_read_recency_for_promote 2
- ceph osd pool set {cachepool} min_write_recency_for_promote 2
+.. prompt:: bash $
+
+ ceph osd pool set {cachepool} min_read_recency_for_promote 2
+ ceph osd pool set {cachepool} min_write_recency_for_promote 2
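+
+As a worked example: with ``hit_set_period 14400`` (four hours) and
+``min_read_recency_for_promote 2``, a read promotes an object only if the
+object appears in one of the two most recent HitSets, that is, only if it was
+accessed within roughly the last eight hours.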
.. note:: The longer the period and the higher the
``min_read_recency_for_promote`` and
@@ -309,22 +331,29 @@ Absolute Sizing
The cache tiering agent can flush or evict objects based upon the total number
of bytes or the total number of objects. To specify a maximum number of bytes,
-execute the following::
+execute the following:
+
+.. prompt:: bash $
+
+ ceph osd pool set {cachepool} target_max_bytes {#bytes}
+
+For example, to flush or evict at 1 TB, execute the following:
- ceph osd pool set {cachepool} target_max_bytes {#bytes}
+.. prompt:: bash $
-For example, to flush or evict at 1 TB, execute the following::
+ ceph osd pool set hot-storage target_max_bytes 1099511627776
- ceph osd pool set hot-storage target_max_bytes 1099511627776
+To specify the maximum number of objects, execute the following:
+.. prompt:: bash $
-To specify the maximum number of objects, execute the following::
+ ceph osd pool set {cachepool} target_max_objects {#objects}
- ceph osd pool set {cachepool} target_max_objects {#objects}
+For example, to flush or evict at 1M objects, execute the following:
-For example, to flush or evict at 1M objects, execute the following::
+.. prompt:: bash $
- ceph osd pool set hot-storage target_max_objects 1000000
+ ceph osd pool set hot-storage target_max_objects 1000000
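+
+Both limits may be set on the same pool, in which case the cache tiering
+agent begins flushing or evicting as soon as either threshold is triggered:
+
+.. prompt:: bash $
+
+   ceph osd pool set hot-storage target_max_bytes 1099511627776
+   ceph osd pool set hot-storage target_max_objects 1000000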
.. note:: Ceph is not able to determine the size of a cache pool automatically, so
the configuration on the absolute size is required here, otherwise the
@@ -341,59 +370,79 @@ The cache tiering agent can flush or evict objects relative to the size of the
cache pool (specified by ``target_max_bytes`` / ``target_max_objects`` in
`Absolute sizing`_). When the cache pool consists of a certain percentage of
modified (or dirty) objects, the cache tiering agent will flush them to the
-storage pool. To set the ``cache_target_dirty_ratio``, execute the following::
+storage pool. To set the ``cache_target_dirty_ratio``, execute the following:
- ceph osd pool set {cachepool} cache_target_dirty_ratio {0.0..1.0}
+.. prompt:: bash $
+
+ ceph osd pool set {cachepool} cache_target_dirty_ratio {0.0..1.0}
For example, setting the value to ``0.4`` will begin flushing modified
-(dirty) objects when they reach 40% of the cache pool's capacity::
+(dirty) objects when they reach 40% of the cache pool's capacity:
+
+.. prompt:: bash $
- ceph osd pool set hot-storage cache_target_dirty_ratio 0.4
+ ceph osd pool set hot-storage cache_target_dirty_ratio 0.4
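+
+For instance, with ``target_max_bytes`` set to 1 TB as in `Absolute sizing`_,
+a ``cache_target_dirty_ratio`` of ``0.4`` starts flushing once roughly 400 GB
+of the cache consists of dirty objects.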
When dirty objects reach a certain percentage of the cache pool's capacity, the cache tiering agent flushes dirty
-objects with a higher speed. To set the ``cache_target_dirty_high_ratio``::
+objects at a higher speed. To set the ``cache_target_dirty_high_ratio``:
+
+.. prompt:: bash $
+
+ ceph osd pool set {cachepool} cache_target_dirty_high_ratio {0.0..1.0}
- ceph osd pool set {cachepool} cache_target_dirty_high_ratio {0.0..1.0}
+For example, setting the value to ``0.6`` will begin aggressively flushing
+dirty objects when they reach 60% of the cache pool's capacity. This value
+is best set between ``dirty_ratio`` and ``full_ratio``:
-For example, setting the value to ``0.6`` will begin aggressively flush dirty objects
-when they reach 60% of the cache pool's capacity. obviously, we'd better set the value
-between dirty_ratio and full_ratio::
+.. prompt:: bash $
- ceph osd pool set hot-storage cache_target_dirty_high_ratio 0.6
+ ceph osd pool set hot-storage cache_target_dirty_high_ratio 0.6
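+
+A consistent configuration keeps the three ratios in increasing order
+(``cache_target_full_ratio`` is described next), for example:
+
+.. prompt:: bash $
+
+   ceph osd pool set hot-storage cache_target_dirty_ratio 0.4
+   ceph osd pool set hot-storage cache_target_dirty_high_ratio 0.6
+   ceph osd pool set hot-storage cache_target_full_ratio 0.8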
When the cache pool reaches a certain percentage of its capacity, the cache
tiering agent will evict objects to maintain free capacity. To set the
-``cache_target_full_ratio``, execute the following::
+``cache_target_full_ratio``, execute the following:
- ceph osd pool set {cachepool} cache_target_full_ratio {0.0..1.0}
+.. prompt:: bash $
+
+ ceph osd pool set {cachepool} cache_target_full_ratio {0.0..1.0}
For example, setting the value to ``0.8`` will begin flushing unmodified
-(clean) objects when they reach 80% of the cache pool's capacity::
+(clean) objects when they reach 80% of the cache pool's capacity:
+
+.. prompt:: bash $
- ceph osd pool set hot-storage cache_target_full_ratio 0.8
+ ceph osd pool set hot-storage cache_target_full_ratio 0.8
Cache Age
---------
You can specify the minimum age of an object before the cache tiering agent
-flushes a recently modified (or dirty) object to the backing storage pool::
+flushes a recently modified (or dirty) object to the backing storage pool:
+
+.. prompt:: bash $
+
+ ceph osd pool set {cachepool} cache_min_flush_age {#seconds}
+
+For example, to flush modified (or dirty) objects after 10 minutes, execute the
+following:
- ceph osd pool set {cachepool} cache_min_flush_age {#seconds}
+.. prompt:: bash $
-For example, to flush modified (or dirty) objects after 10 minutes, execute
-the following::
+ ceph osd pool set hot-storage cache_min_flush_age 600
- ceph osd pool set hot-storage cache_min_flush_age 600
+You can specify the minimum age of an object before it will be evicted from the
+cache tier:
-You can specify the minimum age of an object before it will be evicted from
-the cache tier::
+.. prompt:: bash $
- ceph osd pool {cache-tier} cache_min_evict_age {#seconds}
+   ceph osd pool set {cache-tier} cache_min_evict_age {#seconds}
-For example, to evict objects after 30 minutes, execute the following::
+For example, to evict objects after 30 minutes, execute the following:
- ceph osd pool set hot-storage cache_min_evict_age 1800
+.. prompt:: bash $
+
+ ceph osd pool set hot-storage cache_min_evict_age 1800
Removing a Cache Tier
@@ -409,22 +458,29 @@ Removing a Read-Only Cache
Since a read-only cache does not have modified data, you can disable
and remove it without losing any recent changes to objects in the cache.
-#. Change the cache-mode to ``none`` to disable it. ::
+#. Change the cache-mode to ``none`` to disable it:
+
+   .. prompt:: bash $
+
+ ceph osd tier cache-mode {cachepool} none
+
+ For example:
- ceph osd tier cache-mode {cachepool} none
+ .. prompt:: bash $
- For example::
+ ceph osd tier cache-mode hot-storage none
- ceph osd tier cache-mode hot-storage none
+#. Remove the cache pool from the backing pool:
-#. Remove the cache pool from the backing pool. ::
+ .. prompt:: bash $
- ceph osd tier remove {storagepool} {cachepool}
+ ceph osd tier remove {storagepool} {cachepool}
- For example::
+ For example:
- ceph osd tier remove cold-storage hot-storage
+ .. prompt:: bash $
+ ceph osd tier remove cold-storage hot-storage
Removing a Writeback Cache
@@ -436,41 +492,57 @@ disable and remove it.
#. Change the cache mode to ``proxy`` so that new and modified objects will
- flush to the backing storage pool. ::
+   flush to the backing storage pool:
- ceph osd tier cache-mode {cachepool} proxy
+ .. prompt:: bash $
- For example::
+ ceph osd tier cache-mode {cachepool} proxy
- ceph osd tier cache-mode hot-storage proxy
+ For example:
+ .. prompt:: bash $
-#. Ensure that the cache pool has been flushed. This may take a few minutes::
+ ceph osd tier cache-mode hot-storage proxy
- rados -p {cachepool} ls
+
+#. Ensure that the cache pool has been flushed. This may take a few minutes:
+
+ .. prompt:: bash $
+
+ rados -p {cachepool} ls
If the cache pool still has objects, you can flush them manually.
- For example::
+ For example:
+
+ .. prompt:: bash $
+
+ rados -p {cachepool} cache-flush-evict-all
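+
+   After the flush completes, running ``rados -p {cachepool} ls`` again should
+   show no remaining objects.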
+
+
+#. Remove the overlay so that clients will not direct traffic to the cache:
+
+ .. prompt:: bash $
- rados -p {cachepool} cache-flush-evict-all
+ ceph osd tier remove-overlay {storagetier}
+ For example:
-#. Remove the overlay so that clients will not direct traffic to the cache. ::
+ .. prompt:: bash $
- ceph osd tier remove-overlay {storagetier}
+ ceph osd tier remove-overlay cold-storage
- For example::
- ceph osd tier remove-overlay cold-storage
+#. Finally, remove the cache tier pool from the backing storage pool:
+ .. prompt:: bash $
-#. Finally, remove the cache tier pool from the backing storage pool. ::
+ ceph osd tier remove {storagepool} {cachepool}
- ceph osd tier remove {storagepool} {cachepool}
+ For example:
- For example::
+ .. prompt:: bash $
- ceph osd tier remove cold-storage hot-storage
+ ceph osd tier remove cold-storage hot-storage
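+
+In summary, the full removal sequence for the example writeback cache is
+(repeating the flush step if objects remain):
+
+.. prompt:: bash $
+
+   ceph osd tier cache-mode hot-storage proxy
+   rados -p hot-storage cache-flush-evict-all
+   ceph osd tier remove-overlay cold-storage
+   ceph osd tier remove cold-storage hot-storage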
.. _Create a Pool: ../pools#create-a-pool