author     Anthony D'Atri <anthonyeleven@users.noreply.github.com>  2024-09-01 01:53:35 +0200
committer  Anthony D'Atri <anthonyeleven@users.noreply.github.com>  2024-09-01 01:53:35 +0200
commit     8e743fc603310a802b8553af5e73266a9caf6554 (patch)
tree       4dc26cdccb99fa64008e7b483915eb2a898ac157
parent     Merge pull request #59512 from bugwz/alerts-receivers (diff)
doc/radosgw: Improve config-ref.rst
Signed-off-by: Anthony D'Atri <anthonyeleven@users.noreply.github.com>
 doc/radosgw/config-ref.rst | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst
index 070e00967ae..c678784249f 100644
--- a/doc/radosgw/config-ref.rst
+++ b/doc/radosgw/config-ref.rst
@@ -264,18 +264,18 @@ QoS settings
.. versionadded:: Nautilus
-The ``civetweb`` frontend has a threading model that uses a thread per
+The older and now non-default ``civetweb`` frontend has a threading model that uses a thread per
connection and hence is automatically throttled by :confval:`rgw_thread_pool_size`
-configurable when it comes to accepting connections. The newer ``beast`` frontend is
-not restricted by the thread pool size when it comes to accepting new
-connections, so a scheduler abstraction is introduced in the Nautilus release
-to support future methods of scheduling requests.
+when accepting connections. The newer and default ``beast`` frontend is
+not limited by the thread pool size when it comes to accepting new
+connections, so a scheduler abstraction was introduced in the Nautilus release
+to support additional methods of scheduling requests.
-Currently the scheduler defaults to a throttler which throttles the active
-connections to a configured limit. QoS based on mClock is currently in an
-*experimental* phase and not recommended for production yet. Current
-implementation of *dmclock_client* op queue divides RGW ops on admin, auth
-(swift auth, sts) metadata & data requests.
+Currently the scheduler defaults to a throttler that limits active
+connections to a configured limit. QoS rate limiting based on mClock is currently in an
+*experimental* phase and not recommended for production. The current
+implementation of the *dmclock_client* op queue divides RGW ops into admin, auth
+(swift auth, sts), metadata, and data requests.
.. confval:: rgw_max_concurrent_requests
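
For instance, these throttles can be adjusted with the standard ``ceph config set``
command. The snippet below is an illustrative sketch only; the values are arbitrary
examples, not tuning recommendations::

    # Raise the beast frontend's active-request throttle (illustrative value).
    ceph config set client.rgw rgw_max_concurrent_requests 2048

    # Resize the RGW thread pool that bounds the civetweb frontend (illustrative value).
    ceph config set client.rgw rgw_thread_pool_size 1024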
@@ -305,9 +305,9 @@ D4N Settings
============
D4N is a caching architecture that utilizes Redis to speed up S3 object storage
-operations by establishing shared databases between different RGW access points.
+operations by establishing shared databases among Ceph Object Gateway (RGW) daemons.
-Currently, the architecture can only function on one Redis instance at a time.
+The D4N architecture can only function on one Redis instance at a time.
The address is configurable and can be changed by accessing the parameters
below.
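
As a hedged illustration, the Redis endpoint could be pointed at a local instance
with ``ceph config set``. The option names used here (``rgw_d4n_host`` and
``rgw_d4n_port``) are assumptions; confirm them against the D4N parameters listed
in this section::

    # Assumed option names and illustrative values for the D4N Redis endpoint.
    ceph config set client.rgw rgw_d4n_host 127.0.0.1
    ceph config set client.rgw rgw_d4n_port 6379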
@@ -324,18 +324,18 @@ below.
Topic persistency settings
==========================
-Topic persistency will persistently push the notification until it succeeds.
+Topic persistency will repeatedly push notifications until they succeed.
For more information, see `Bucket Notifications`_.
The default behavior is to push indefinitely and as frequently as possible.
With these settings you can control how long and how often to retry an
-unsuccessful notification. How long to persistently push can be controlled
-by providing maximum time of retention or maximum amount of retries.
-Frequency of persistent push retries can be controlled with the sleep duration
+unsuccessful notification by configuring the maximum retention time and/or
+maximum number of retries.
+The interval between push retries can be configured via the sleep duration
parameter.
-All of these values have default value 0 (persistent retention is indefinite,
-and retried as frequently as possible).
+All of these options default to the value ``0``, which means that persistent
+retention is indefinite, and notifications are retried as frequently as possible.
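
As a sketch of how retries could be bounded (illustrative values only, using the
persistency options listed just below; ``rgw_topic_persistency_sleep_duration`` is
the assumed name of the sleep-duration parameter mentioned above)::

    # Give up on a notification after one hour or 120 attempts,
    # waiting 30 seconds between attempts (illustrative values).
    ceph config set client.rgw rgw_topic_persistency_time_to_live 3600
    ceph config set client.rgw rgw_topic_persistency_max_retries 120
    ceph config set client.rgw rgw_topic_persistency_sleep_duration 30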
.. confval:: rgw_topic_persistency_time_to_live
.. confval:: rgw_topic_persistency_max_retries