author     Anthony D'Atri <anthonyeleven@users.noreply.github.com>  2025-01-17 02:46:57 +0100
committer  GitHub <noreply@github.com>  2025-01-17 02:46:57 +0100
commit     90e1ab89a4615feede66dd05741d19c9ce4496eb
tree       4e13beb29c9b9b1f41f2254dab6d91ab8dca9968
parent     Merge pull request #61412 from gbregman/main
parent     doc/radosgw/config-ref: fix lifecycle workload tuning description
Merge pull request #61369 from laimis9133/patch-2
doc/radosgw/config-ref: fix lifecycle workload tuning description
 doc/radosgw/config-ref.rst | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst
index b4aa56fff54..405bc727208 100644
--- a/doc/radosgw/config-ref.rst
+++ b/doc/radosgw/config-ref.rst
@@ -75,10 +75,11 @@ aggressiveness of lifecycle processing:
 .. confval:: rgw_lc_max_wp_worker
 
 These values can be tuned based upon your specific workload to further increase the
-aggressiveness of lifecycle processing. For a workload with a larger number of buckets (thousands)
-you would look at increasing the :confval:`rgw_lc_max_worker` value from the default value of 3 whereas for a
-workload with a smaller number of buckets but higher number of objects (hundreds of thousands)
-per bucket you would consider increasing :confval:`rgw_lc_max_wp_worker` from the default value of 3.
+aggressiveness of lifecycle processing. For a workload with a large number of buckets (thousands)
+you would raise the number of workers by increasing :confval:`rgw_lc_max_worker`
+from the default value of 3. Whereas for a workload with a higher number of objects per bucket
+(hundreds of thousands) you would raise the number of parallel threads
+by increasing :confval:`rgw_lc_max_wp_worker` from the default value of 3.
 
 .. note:: When looking to tune either of these specific values please validate the current
    Cluster performance and Ceph Object Gateway utilization before increasing.
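As a usage sketch (not part of this commit), the two options discussed in the corrected paragraph can be adjusted at runtime with `ceph config set`. The values 5 and 6 below are illustrative assumptions for a cluster that needs more aggressive lifecycle processing, not recommendations; both options default to 3, and the `.. note::` in the diff applies — validate cluster and RGW load before raising either.

```shell
# Many buckets (thousands): raise the number of lifecycle worker
# threads. The value 5 is illustrative; the default is 3.
ceph config set client.rgw rgw_lc_max_worker 5

# Many objects per bucket (hundreds of thousands): raise the number of
# parallel work-pool threads per worker. Again, 6 is illustrative only.
ceph config set client.rgw rgw_lc_max_wp_worker 6

# Confirm the values now in effect for the RGW client section.
ceph config get client.rgw rgw_lc_max_worker
ceph config get client.rgw rgw_lc_max_wp_worker
```

These commands require a running Ceph cluster with admin access; the `client.rgw` section applies the setting to all RGW daemons, and a more specific section (e.g. a single daemon's name) could be used to scope it narrower.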