Diffstat (limited to 'doc/cephadm')
-rw-r--r--  doc/cephadm/install.rst                93
-rw-r--r--  doc/cephadm/operations.rst              2
-rw-r--r--  doc/cephadm/services/index.rst          2
-rw-r--r--  doc/cephadm/services/mgmt-gateway.rst   2
-rw-r--r--  doc/cephadm/services/mon.rst            6
-rw-r--r--  doc/cephadm/services/monitoring.rst    50
-rw-r--r--  doc/cephadm/services/osd.rst           58
-rw-r--r--  doc/cephadm/services/rgw.rst           26
8 files changed, 159 insertions, 80 deletions
diff --git a/doc/cephadm/install.rst b/doc/cephadm/install.rst
index 19f477c2cec..88a170fe6a3 100644
--- a/doc/cephadm/install.rst
+++ b/doc/cephadm/install.rst
@@ -1,8 +1,8 @@
.. _cephadm_deploying_new_cluster:
-============================
-Deploying a new Ceph cluster
-============================
+==========================================
+Using cephadm to Deploy a New Ceph Cluster
+==========================================
Cephadm creates a new Ceph cluster by bootstrapping a single
host, expanding the cluster to encompass any additional hosts, and
@@ -95,67 +95,80 @@ that case, you can install cephadm directly. For example:
.. _cephadm_install_curl:
-curl-based installation
------------------------
+Using curl to install cephadm
+-----------------------------
-* First, determine what version of Ceph you wish to install. You can use the releases
- page to find the `latest active releases <https://docs.ceph.com/en/latest/releases/#active-releases>`_.
- For example, we might find that ``18.2.1`` is the latest
- active release.
+#. Determine which version of Ceph you will install. Use the releases page to
+ find the `latest active releases
+ <https://docs.ceph.com/en/latest/releases/#active-releases>`_. For example,
+ you might find that ``18.2.1`` is the latest active release.
-* Use ``curl`` to fetch a build of cephadm for that release.
+#. Use ``curl`` to fetch a build of cephadm for that release.
- .. prompt:: bash #
- :substitutions:
+ .. prompt:: bash #
+ :substitutions:
- CEPH_RELEASE=18.2.0 # replace this with the active release
- curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
+ CEPH_RELEASE=18.2.0 # replace this with the active release
+ curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
- Ensure the ``cephadm`` file is executable:
+#. Use ``chmod`` to make the ``cephadm`` file executable:
- .. prompt:: bash #
+ .. prompt:: bash #
- chmod +x cephadm
+ chmod +x cephadm
- This file can be run directly from the current directory:
+ After ``chmod`` has been run on the ``cephadm`` file, the file can be run
+ from the current directory:
- .. prompt:: bash #
+ .. prompt:: bash #
+
+ ./cephadm <arguments...>
- ./cephadm <arguments...>
+cephadm Requires Python 3.6 or Later
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-* If you encounter any issues with running cephadm due to errors including
- the message ``bad interpreter``, then you may not have Python or
- the correct version of Python installed. The cephadm tool requires Python 3.6
- or later. You can manually run cephadm with a particular version of Python by
- prefixing the command with your installed Python version. For example:
+* ``cephadm`` requires Python 3.6 or later. If you encounter difficulties
+ running ``cephadm``, you may not have Python or the correct version of
+ Python installed. Errors that include the message ``bad interpreter`` are a
+ typical symptom.
+
+ You can manually run cephadm with a particular version of Python by prefixing
+ the command with your installed Python version. For example:
.. prompt:: bash #
- :substitutions:
python3.8 ./cephadm <arguments...>
-* Although the standalone cephadm is sufficient to bootstrap a cluster, it is
- best to have the ``cephadm`` command installed on the host. To install
- the packages that provide the ``cephadm`` command, run the following
- commands:
+Installing cephadm on the Host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- .. prompt:: bash #
- :substitutions:
+Although the standalone ``cephadm`` is sufficient to bootstrap a cluster, it is
+best to have the ``cephadm`` command installed on the host. To install the
+packages that provide the ``cephadm`` command, run the following commands:
- ./cephadm add-repo --release |stable-release|
- ./cephadm install
+#. Add the repository:
- Confirm that ``cephadm`` is now in your PATH by running ``which``:
+ .. prompt:: bash #
- .. prompt:: bash #
+ ./cephadm add-repo --release |stable-release|
+
+#. Run ``cephadm install``:
+
+ .. prompt:: bash #
+
+ ./cephadm install
+
+#. Confirm that ``cephadm`` is now in your PATH by running ``which``:
+
+ .. prompt:: bash #
- which cephadm
+ which cephadm
- A successful ``which cephadm`` command will return this:
+ A successful ``which cephadm`` command will return this:
- .. code-block:: bash
+ .. code-block:: bash
- /usr/sbin/cephadm
+ /usr/sbin/cephadm
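+
+ As an optional check, ask the installed ``cephadm`` to report its version
+ (the exact output varies by release):
+
+ .. prompt:: bash #
+
+ cephadm version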
Bootstrap a new cluster
=======================
diff --git a/doc/cephadm/operations.rst b/doc/cephadm/operations.rst
index 420ee655ac8..22d91c39b06 100644
--- a/doc/cephadm/operations.rst
+++ b/doc/cephadm/operations.rst
@@ -375,7 +375,7 @@ One or more hosts have failed the basic cephadm host check, which verifies
that (1) the host is reachable and cephadm can be executed there, and (2)
that the host satisfies basic prerequisites, like a working container
runtime (podman or docker) and working time synchronization.
-If this test fails, cephadm will no be able to manage services on that host.
+If this test fails, cephadm will not be able to manage services on that host.
You can manually run this check by running the following command:
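
For reference, the check takes the host name as its argument (the placeholder
below is illustrative):

.. prompt:: bash #

   ceph cephadm check-host *<hostname>*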
diff --git a/doc/cephadm/services/index.rst b/doc/cephadm/services/index.rst
index 86a3fad8ab3..4df9933f8e7 100644
--- a/doc/cephadm/services/index.rst
+++ b/doc/cephadm/services/index.rst
@@ -357,6 +357,8 @@ Or in YAML:
* See :ref:`orchestrator-host-labels`
+.. _cephadm-services-placement-by-pattern-matching:
+
Placement by pattern matching
-----------------------------
diff --git a/doc/cephadm/services/mgmt-gateway.rst b/doc/cephadm/services/mgmt-gateway.rst
index 2b88d55952e..55c024817ae 100644
--- a/doc/cephadm/services/mgmt-gateway.rst
+++ b/doc/cephadm/services/mgmt-gateway.rst
@@ -183,7 +183,7 @@ The `mgmt-gateway` service internally makes use of nginx reverse proxy. The foll
::
- DEFAULT_NGINX_IMAGE = 'quay.io/ceph/nginx:1.26.1'
+ mgr/cephadm/container_image_nginx = 'quay.io/ceph/nginx:sclorg-nginx-126'
Admins can specify the image to be used by changing the `container_image_nginx` cephadm module option. If there were already
running daemon(s) you must redeploy the daemon(s) in order to have them actually use the new image.
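
For example, the option can typically be changed with a command of the
following form (``<nginx-image>`` is a placeholder for the image you want):

.. prompt:: bash #

   ceph config set mgr mgr/cephadm/container_image_nginx <nginx-image>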
diff --git a/doc/cephadm/services/mon.rst b/doc/cephadm/services/mon.rst
index 389dc450e95..86cc121c9d5 100644
--- a/doc/cephadm/services/mon.rst
+++ b/doc/cephadm/services/mon.rst
@@ -23,8 +23,8 @@ cluster to a particular subnet. ``cephadm`` designates that subnet as the
default subnet of the cluster. New monitor daemons will be assigned by
default to that subnet unless cephadm is instructed to do otherwise.
-If all of the ceph monitor daemons in your cluster are in the same subnet,
-manual administration of the ceph monitor daemons is not necessary.
+If all of the Ceph monitor daemons in your cluster are in the same subnet,
+manual administration of the Ceph monitor daemons is not necessary.
``cephadm`` will automatically add up to five monitors to the subnet, as
needed, as new hosts are added to the cluster.
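
If you want a specific number of monitors rather than letting cephadm decide,
you can request a count explicitly (the count of five here is illustrative):

.. prompt:: bash #

   ceph orch apply mon 5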
@@ -35,7 +35,7 @@ the placement of daemons.
Designating a Particular Subnet for Monitors
--------------------------------------------
-To designate a particular IP subnet for use by ceph monitor daemons, use a
+To designate a particular IP subnet for use by Ceph monitor daemons, use a
command of the following form, including the subnet's address in `CIDR`_
format (e.g., ``10.1.2.0/24``):
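
For reference, such a command might look like this, using the example subnet
above:

.. prompt:: bash #

   ceph config set mon public_network 10.1.2.0/24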
diff --git a/doc/cephadm/services/monitoring.rst b/doc/cephadm/services/monitoring.rst
index a0187363b5e..ef987fd7bd3 100644
--- a/doc/cephadm/services/monitoring.rst
+++ b/doc/cephadm/services/monitoring.rst
@@ -173,24 +173,22 @@ the [ceph-users] mailing list in April of 2024. The thread can be viewed here:
``var/lib/ceph/{FSID}/cephadm.{DIGEST}``, where ``{DIGEST}`` is an alphanumeric
string representing the currently-running version of Ceph.
-To see the default container images, run a command of the following form:
+To see the default container images, run the following command:
.. prompt:: bash #
- grep -E "DEFAULT*IMAGE" /var/lib/ceph/{FSID}/cephadm.{DIGEST}
+ cephadm list-images
-::
-
- DEFAULT_PROMETHEUS_IMAGE = 'quay.io/prometheus/prometheus:v2.51.0'
- DEFAULT_LOKI_IMAGE = 'docker.io/grafana/loki:2.9.5'
- DEFAULT_PROMTAIL_IMAGE = 'docker.io/grafana/promtail:2.9.5'
- DEFAULT_NODE_EXPORTER_IMAGE = 'quay.io/prometheus/node-exporter:v1.7.0'
- DEFAULT_ALERT_MANAGER_IMAGE = 'quay.io/prometheus/alertmanager:v0.27.0'
- DEFAULT_GRAFANA_IMAGE = 'quay.io/ceph/grafana:10.4.0'
Default monitoring images are specified in
-``/src/cephadm/cephadmlib/constants.py`` and in
-``/src/pybind/mgr/cephadm/module.py``.
+``/src/python-common/ceph/cephadm/images.py``.
+
+
+.. autoclass:: ceph.cephadm.images.DefaultImages
+ :members:
+ :undoc-members:
+ :exclude-members: desc, image_ref, key
+
Using custom images
~~~~~~~~~~~~~~~~~~~
@@ -304,14 +302,24 @@ Option names
""""""""""""
The following templates for files that will be generated by cephadm can be
-overridden. These are the names to be used when storing with ``ceph config-key
-set``:
+overridden. These are the names to be used when storing with ``ceph config-key set``:
- ``services/alertmanager/alertmanager.yml``
+- ``services/alertmanager/web.yml``
- ``services/grafana/ceph-dashboard.yml``
- ``services/grafana/grafana.ini``
+- ``services/ingress/haproxy.cfg``
+- ``services/ingress/keepalived.conf``
+- ``services/iscsi/iscsi-gateway.cfg``
+- ``services/mgmt-gateway/external_server.conf``
+- ``services/mgmt-gateway/internal_server.conf``
+- ``services/mgmt-gateway/nginx.conf``
+- ``services/nfs/ganesha.conf``
+- ``services/node-exporter/web.yml``
+- ``services/nvmeof/ceph-nvmeof.conf``
+- ``services/oauth2-proxy/oauth2-proxy.conf``
- ``services/prometheus/prometheus.yml``
-- ``services/prometheus/alerting/custom_alerts.yml``
+- ``services/prometheus/web.yml``
- ``services/loki.yml``
- ``services/promtail.yml``
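
For example, one of these templates can be overridden by storing a customized
file under its name with the ``mgr/cephadm/`` prefix (the Grafana file and path
here are illustrative):

.. prompt:: bash #

   ceph config-key set mgr/cephadm/services/grafana/grafana.ini -i $PWD/grafana.ini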
@@ -319,9 +327,21 @@ You can look up the file templates that are currently used by cephadm in
``src/pybind/mgr/cephadm/templates``:
- ``services/alertmanager/alertmanager.yml.j2``
+- ``services/alertmanager/web.yml.j2``
- ``services/grafana/ceph-dashboard.yml.j2``
- ``services/grafana/grafana.ini.j2``
+- ``services/ingress/haproxy.cfg.j2``
+- ``services/ingress/keepalived.conf.j2``
+- ``services/iscsi/iscsi-gateway.cfg.j2``
+- ``services/mgmt-gateway/external_server.conf.j2``
+- ``services/mgmt-gateway/internal_server.conf.j2``
+- ``services/mgmt-gateway/nginx.conf.j2``
+- ``services/nfs/ganesha.conf.j2``
+- ``services/node-exporter/web.yml.j2``
+- ``services/nvmeof/ceph-nvmeof.conf.j2``
+- ``services/oauth2-proxy/oauth2-proxy.conf.j2``
- ``services/prometheus/prometheus.yml.j2``
+- ``services/prometheus/web.yml.j2``
- ``services/loki.yml.j2``
- ``services/promtail.yml.j2``
diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index 9c0b4d2b495..90ebd86f897 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -198,6 +198,18 @@ There are a few ways to create new OSDs:
.. warning:: When deploying new OSDs with ``cephadm``, ensure that the ``ceph-osd`` package is not already installed on the target host. If it is installed, conflicts may arise in the management and control of the OSD that may lead to errors or unexpected behavior.
+* OSDs created via ``ceph orch daemon add`` are not, by default, attached to any user-defined OSD service; they are placed in the plain 'osd' service. To attach an OSD to a different, existing OSD service, issue a command of the following form:
+
+ .. prompt:: bash #
+
+ ceph orch osd set-spec-affinity <service_name> <osd_id(s)>
+
+ For example:
+
+ .. prompt:: bash #
+
+ ceph orch osd set-spec-affinity osd.default_drive_group 0 1
+
Dry Run
-------
@@ -478,22 +490,27 @@ for that OSD and also set a specific memory target. For example,
Advanced OSD Service Specifications
===================================
-:ref:`orchestrator-cli-service-spec`\s of type ``osd`` are a way to describe a
-cluster layout, using the properties of disks. Service specifications give the
-user an abstract way to tell Ceph which disks should turn into OSDs with which
-configurations, without knowing the specifics of device names and paths.
+:ref:`orchestrator-cli-service-spec`\s of type ``osd`` provide a way to use the
+properties of disks to describe a Ceph cluster's layout. Service specifications
+are an abstraction used to tell Ceph which disks it should transform into OSDs
+and which configurations to apply to those OSDs.
+:ref:`orchestrator-cli-service-spec`\s make it possible to target these disks
+for transformation into OSDs even when the Ceph cluster operator does not know
+the specific device names and paths associated with those disks.
-Service specifications make it possible to define a yaml or json file that can
-be used to reduce the amount of manual work involved in creating OSDs.
+:ref:`orchestrator-cli-service-spec`\s make it possible to define a ``.yaml``
+or ``.json`` file that can be used to reduce the amount of manual work involved
+in creating OSDs.
.. note::
- It is recommended that advanced OSD specs include the ``service_id`` field
- set. The plain ``osd`` service with no service id is where OSDs created
- using ``ceph orch daemon add`` or ``ceph orch apply osd --all-available-devices``
- are placed. Not including a ``service_id`` in your OSD spec would mix
- the OSDs from your spec with those OSDs and potentially overwrite services
- specs created by cephadm to track them. Newer versions of cephadm will even
- block creation of advanced OSD specs without the service_id present
+ We recommend that advanced OSD specs include the ``service_id`` field set.
+ OSDs created using ``ceph orch daemon add`` or ``ceph orch apply osd
+ --all-available-devices`` are placed in the plain ``osd`` service. Failing
+ to include a ``service_id`` in your OSD spec causes the Ceph cluster to mix
+ the OSDs from your spec with those OSDs, which can potentially result in the
+ overwriting of service specs created by ``cephadm`` to track them. Newer
+ versions of ``cephadm`` will even block creation of advanced OSD specs that
+ do not include the ``service_id``.
For example, instead of running the following command:
@@ -501,8 +518,8 @@ For example, instead of running the following command:
ceph orch daemon add osd *<host>*:*<path-to-device>*
-for each device and each host, we can define a yaml or json file that allows us
-to describe the layout. Here's the most basic example.
+for each device and each host, we can define a ``.yaml`` or ``.json`` file that
+allows us to describe the layout. Here is the most basic example:
Create a file called (for example) ``osd_spec.yml``:
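
A minimal sketch of such a spec (the ``service_id`` and the ``all`` filter
shown here are illustrative):

.. code-block:: yaml

    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true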
@@ -520,17 +537,18 @@ This means :
#. Turn any available device (ceph-volume decides what 'available' is) into an
OSD on all hosts that match the glob pattern '*'. (The glob pattern matches
- against the registered hosts from `host ls`) A more detailed section on
- host_pattern is available below.
+ against the registered hosts from `ceph orch host ls`.) See
+ :ref:`cephadm-services-placement-by-pattern-matching` for more on using
+ ``host_pattern``-matching to turn devices into OSDs.
-#. Then pass it to `osd create` like this:
+#. Pass ``osd_spec.yml`` to ``osd create`` by using the following command:
.. prompt:: bash [monitor.1]#
ceph orch apply -i /path/to/osd_spec.yml
- This instruction will be issued to all the matching hosts, and will deploy
- these OSDs.
+ This instruction is issued to all the matching hosts, and will deploy these
+ OSDs.
Setups more complex than the one specified by the ``all`` filter are
possible. See :ref:`osd_filters` for details.
diff --git a/doc/cephadm/services/rgw.rst b/doc/cephadm/services/rgw.rst
index ed0b149365a..3df8ed2fc56 100644
--- a/doc/cephadm/services/rgw.rst
+++ b/doc/cephadm/services/rgw.rst
@@ -173,6 +173,32 @@ Then apply this yaml document:
Note the value of ``rgw_frontend_ssl_certificate`` is a literal string as
indicated by a ``|`` character preserving newline characters.
+Disabling multisite sync traffic
+--------------------------------
+
+There is an RGW config option called ``rgw_run_sync_thread`` that tells the
+RGW daemon not to transmit multisite replication data. This is useful if you
+want that RGW daemon to be dedicated to I/O rather than to multisite sync
+operations. The RGW spec file includes a setting, ``disable_multisite_sync_traffic``,
+which, when set to "True", tells cephadm to set ``rgw_run_sync_thread`` to false
+for all RGW daemons deployed for that RGW service. For example:
+
+.. code-block:: yaml
+
+ service_type: rgw
+ service_id: foo
+ placement:
+ label: rgw
+ spec:
+ rgw_realm: myrealm
+ rgw_zone: myzone
+ rgw_zonegroup: myzg
+ disable_multisite_sync_traffic: True
+
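+To put this spec into effect, apply it with the orchestrator (the file name is
+illustrative):
+
+.. prompt:: bash #
+
+   ceph orch apply -i rgw-sync-off.yaml
+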
+.. note:: This will only stop the RGW daemon(s) from sending replication data.
+ The daemon can still receive replication data unless it has been removed
+ from the zonegroup and zone replication endpoints.
+
Service specification
---------------------