Diffstat (limited to 'doc/install/ceph-deploy')
-rw-r--r--  doc/install/ceph-deploy/index.rst                 |  56
-rw-r--r--  doc/install/ceph-deploy/install-ceph-gateway.rst  | 615
-rw-r--r--  doc/install/ceph-deploy/quick-ceph-deploy.rst     | 346
-rw-r--r--  doc/install/ceph-deploy/quick-cephfs.rst          | 212
-rw-r--r--  doc/install/ceph-deploy/quick-common.rst          |  20
-rw-r--r--  doc/install/ceph-deploy/quick-rgw-old.rst         |  30
-rw-r--r--  doc/install/ceph-deploy/quick-rgw.rst             | 101
-rw-r--r--  doc/install/ceph-deploy/quick-start-preflight.rst | 364
-rw-r--r--  doc/install/ceph-deploy/upgrading-ceph.rst        | 235
9 files changed, 1979 insertions, 0 deletions
diff --git a/doc/install/ceph-deploy/index.rst b/doc/install/ceph-deploy/index.rst
new file mode 100644
index 00000000000..9579d3b3be8
--- /dev/null
+++ b/doc/install/ceph-deploy/index.rst
@@ -0,0 +1,56 @@
+.. _ceph-deploy-index:
+
+============================
+ Installation (ceph-deploy)
+============================
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Preflight</h3>
+
+A :term:`Ceph Client` and a :term:`Ceph Node` may require some basic
+configuration work prior to deploying a Ceph Storage Cluster. You can also
+avail yourself of help by getting involved in the Ceph community.
+
+.. toctree::
+
+ Preflight <quick-start-preflight>
+
+.. raw:: html
+
+ </td><td><h3>Step 2: Storage Cluster</h3>
+
+Once you have completed your preflight checklist, you should be able to begin
+deploying a Ceph Storage Cluster.
+
+.. toctree::
+
+ Storage Cluster Quick Start <quick-ceph-deploy>
+
+
+.. raw:: html
+
+ </td><td><h3>Step 3: Ceph Client(s)</h3>
+
+Most Ceph users don't store objects directly in the Ceph Storage Cluster. They typically use at least one of
+Ceph Block Devices, the Ceph File System, and Ceph Object Storage.
+
+.. toctree::
+
+ Block Device Quick Start <../../start/quick-rbd>
+ Filesystem Quick Start <quick-cephfs>
+ Object Storage Quick Start <quick-rgw>
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+
+.. toctree::
+ :hidden:
+
+ Upgrading Ceph <upgrading-ceph>
+ Install Ceph Object Gateway <install-ceph-gateway>
+
+
diff --git a/doc/install/ceph-deploy/install-ceph-gateway.rst b/doc/install/ceph-deploy/install-ceph-gateway.rst
new file mode 100644
index 00000000000..fe5b6b574cd
--- /dev/null
+++ b/doc/install/ceph-deploy/install-ceph-gateway.rst
@@ -0,0 +1,615 @@
+===========================
+Install Ceph Object Gateway
+===========================
+
+As of `firefly` (v0.80), the Ceph Object Gateway runs on Civetweb (embedded
+into the ``ceph-radosgw`` daemon) instead of Apache and FastCGI. Using Civetweb
+simplifies the Ceph Object Gateway installation and configuration.
+
+.. note:: To run the Ceph Object Gateway service, you should have a running
+ Ceph storage cluster, and the gateway host should have access to the
+ public network.
+
+.. note:: In version 0.80, the Ceph Object Gateway does not support SSL. You
+   may set up a reverse proxy server with SSL to dispatch HTTPS requests
+   as HTTP requests to CivetWeb.
+
+Execute the Pre-Installation Procedure
+--------------------------------------
+
+See Preflight_ and execute the pre-installation procedures on your Ceph Object
+Gateway node. Specifically, you should disable ``requiretty`` on your Ceph
+Deploy user, set SELinux to ``Permissive`` and set up a Ceph Deploy user with
+password-less ``sudo``. For Ceph Object Gateways, you will need to open the
+port that Civetweb will use in production.
+
+.. note:: Civetweb runs on port ``7480`` by default.
+
+Install Ceph Object Gateway
+---------------------------
+
+From the working directory of your administration server, install the Ceph
+Object Gateway package on the Ceph Object Gateway node. For example::
+
+ ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...]
+
+The ``ceph-common`` package is a dependency, so ``ceph-deploy`` will install
+this too. The ``ceph`` CLI tools are intended for administrators. To make your
+Ceph Object Gateway node an administrator node, execute the following from the
+working directory of your administration server::
+
+ ceph-deploy admin <node-name>
+
+Create a Gateway Instance
+-------------------------
+
+From the working directory of your administration server, create an instance of
+the Ceph Object Gateway on the Ceph Object Gateway node. For example::
+
+ ceph-deploy rgw create <gateway-node1>
+
+Once the gateway is running, you should be able to access it on port ``7480``
+with an unauthenticated request like this::
+
+ http://client-node:7480
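+
+As a sketch, the same check from the command line with ``curl`` (using the
+placeholder host name ``client-node``) would be::
+
+    curl http://client-node:7480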
+
+If the gateway instance is working properly, you should receive a response like
+this::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <Owner>
+ <ID>anonymous</ID>
+ <DisplayName></DisplayName>
+ </Owner>
+ <Buckets>
+ </Buckets>
+ </ListAllMyBucketsResult>
+
+If at any point you run into trouble and you want to start over, execute the
+following to purge the configuration::
+
+ ceph-deploy purge <gateway-node1> [<gateway-node2>]
+ ceph-deploy purgedata <gateway-node1> [<gateway-node2>]
+
+If you execute ``purge``, you must re-install Ceph.
+
+Change the Default Port
+-----------------------
+
+Civetweb runs on port ``7480`` by default. To change the default port (e.g., to
+port ``80``), modify your Ceph configuration file in the working directory of
+your administration server. Add a section entitled
+``[client.rgw.<gateway-node>]``, replacing ``<gateway-node>`` with the short
+node name of your Ceph Object Gateway node (i.e., ``hostname -s``).
+
+.. note:: As of version 11.0.1, the Ceph Object Gateway **does** support SSL.
+ See `Using SSL with Civetweb`_ for information on how to set that up.
+
+For example, if your node name is ``gateway-node1``, add a section like this
+after the ``[global]`` section::
+
+ [client.rgw.gateway-node1]
+ rgw_frontends = "civetweb port=80"
+
+.. note:: Ensure that you leave no whitespace between ``port=<port-number>`` in
+ the ``rgw_frontends`` key/value pair. The ``[client.rgw.gateway-node1]``
+ heading identifies this portion of the Ceph configuration file as
+ configuring a Ceph Storage Cluster client where the client type is a Ceph
+ Object Gateway (i.e., ``rgw``), and the name of the instance is
+ ``gateway-node1``.
+
+Push the updated configuration file to your Ceph Object Gateway node
+(and other Ceph nodes)::
+
+ ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]
+
+To make the new port setting take effect, restart the Ceph Object
+Gateway::
+
+ sudo systemctl restart ceph-radosgw.service
+
+Finally, check to ensure that the port you selected is open on the node's
+firewall (e.g., port ``80``). If it is not open, add the port and reload the
+firewall configuration. If you use the ``firewalld`` daemon, execute::
+
+ sudo firewall-cmd --list-all
+ sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
+ sudo firewall-cmd --reload
+
+If you use ``iptables``, execute::
+
+ sudo iptables --list
+ sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
+
+Replace ``<iface>`` and ``<ip-address>/<netmask>`` with the relevant values for
+your Ceph Object Gateway node.
+
+Once you have finished configuring ``iptables``, ensure that you make the
+change persistent so that it will be in effect when your Ceph Object Gateway
+node reboots. Execute::
+
+ sudo apt-get install iptables-persistent
+
+A terminal UI will open up. Select ``yes`` for the prompts to save current
+``IPv4`` iptables rules to ``/etc/iptables/rules.v4`` and current ``IPv6``
+iptables rules to ``/etc/iptables/rules.v6``.
+
+The ``IPv4`` iptables rule that you set in the earlier step will be loaded in
+``/etc/iptables/rules.v4`` and will be persistent across reboots.
+
+If you add a new ``IPv4`` iptables rule after installing
+``iptables-persistent``, you will have to add it to the rule file. In that case,
+execute the following as the ``root`` user::
+
+ iptables-save > /etc/iptables/rules.v4
+
+Using SSL with Civetweb
+-----------------------
+.. _Using SSL with Civetweb:
+
+Before using SSL with civetweb, you will need a certificate that will match
+the host name that will be used to access the Ceph Object Gateway.
+You may wish to obtain one that has `subject alternate name` fields for
+more flexibility. If you intend to use S3-style subdomains
+(`Add Wildcard to DNS`_), you will need a `wildcard` certificate.
+
+Civetweb requires that the server key, server certificate, and any other
+CA or intermediate certificates be supplied in one file. Each of these
+items must be in `pem` form. Because the combined file contains the
+secret key, it should be protected from unauthorized access.
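+
+As an illustrative sketch only (the file names here are placeholders), the
+pieces could be combined and protected like this::
+
+    cat server.key server.crt ca-chain.crt > /etc/ceph/private/keyandcert.pem
+    chmod 600 /etc/ceph/private/keyandcert.pem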
+
+To configure SSL operation, append ``s`` to the port number. For example::
+
+ [client.rgw.gateway-node1]
+ rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
+
+.. versionadded:: Luminous
+
+Furthermore, civetweb can be made to bind to multiple ports by separating them
+with ``+`` in the configuration. This allows for use cases where both SSL and
+non-SSL connections are hosted by a single rgw instance. For example::
+
+ [client.rgw.gateway-node1]
+ rgw_frontends = civetweb port=80+443s ssl_certificate=/etc/ceph/private/keyandcert.pem
+
+Additional Civetweb Configuration Options
+-----------------------------------------
+Some additional configuration options can be adjusted for the embedded Civetweb web server
+in the **Ceph Object Gateway** section of the ``ceph.conf`` file.
+A list of supported options, including an example, can be found in the `HTTP Frontends`_ documentation.
+
+Migrating from Apache to Civetweb
+---------------------------------
+
+If you are running the Ceph Object Gateway on Apache and FastCGI with Ceph
+Storage v0.80 or above, you are already running Civetweb--it starts with the
+``ceph-radosgw`` daemon and it's running on port 7480 by default so that it
+doesn't conflict with your Apache and FastCGI installation and other commonly
+used web service ports. Migrating to use Civetweb basically involves removing
+your Apache installation. Then, you must remove Apache and FastCGI settings
+from your Ceph configuration file and reset ``rgw_frontends`` to Civetweb.
+
+Referring back to the description for installing a Ceph Object Gateway with
+``ceph-deploy``, notice that the configuration file only has one setting
+``rgw_frontends`` (and that's assuming you elected to change the default port).
+The ``ceph-deploy`` utility generates the data directory and the keyring for
+you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
+looks in default locations, whereas you may have specified different settings
+in your Ceph configuration file. Since you already have keys and a data
+directory, you will want to maintain those paths in your Ceph configuration
+file if you used something other than default paths.
+
+A typical Ceph Object Gateway configuration file for an Apache-based deployment
+looks something like the following:
+
+On Red Hat Enterprise Linux::
+
+    [client.radosgw.gateway-node1]
+    host = {hostname}
+    keyring = /etc/ceph/ceph.client.radosgw.keyring
+    rgw socket path = ""
+    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+    rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
+    rgw print continue = false
+
+On Ubuntu::
+
+ [client.radosgw.gateway-node]
+ host = {hostname}
+ keyring = /etc/ceph/ceph.client.radosgw.keyring
+ rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
+ log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+
+To modify it for use with Civetweb, simply remove the Apache-specific settings
+such as ``rgw_socket_path`` and ``rgw_print_continue``. Then, change the
+``rgw_frontends`` setting to reflect Civetweb rather than the Apache FastCGI
+front end and specify the port number you intend to use. For example::
+
+ [client.radosgw.gateway-node1]
+ host = {hostname}
+ keyring = /etc/ceph/ceph.client.radosgw.keyring
+ log file = /var/log/radosgw/client.radosgw.gateway-node1.log
+ rgw_frontends = civetweb port=80
+
+Finally, restart the Ceph Object Gateway. On Red Hat Enterprise Linux execute::
+
+ sudo systemctl restart ceph-radosgw.service
+
+On Ubuntu execute::
+
+ sudo service radosgw restart id=rgw.<short-hostname>
+
+If you used a port number that is not open, you will also need to open that
+port on your firewall.
+
+Configure Bucket Sharding
+-------------------------
+
+A Ceph Object Gateway stores bucket index data in the ``index_pool``, which
+defaults to ``.rgw.buckets.index``. Sometimes users like to put many objects
+(hundreds of thousands to millions of objects) in a single bucket. If you do
+not use the gateway administration interface to set quotas for the maximum
+number of objects per bucket, the bucket index can suffer significant
+performance degradation when users place large numbers of objects into a
+bucket.
+
+In Ceph 0.94, you may shard bucket indices to help prevent performance
+bottlenecks when you allow a high number of objects per bucket. The
+``rgw_override_bucket_index_max_shards`` setting allows you to set a maximum
+number of shards per bucket. The default value is ``0``, which means bucket
+index sharding is off by default.
+
+To turn bucket index sharding on, set ``rgw_override_bucket_index_max_shards``
+to a value greater than ``0``.
+
+For simple configurations, you may add ``rgw_override_bucket_index_max_shards``
+to your Ceph configuration file. Add it under ``[global]`` to create a
+system-wide value. You can also set it for each instance in your Ceph
+configuration file.
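+
+For example, a sketch showing both placements (``gateway-node1`` is a
+placeholder instance name, and ``10`` is an arbitrary illustrative shard count)::
+
+    [global]
+    rgw_override_bucket_index_max_shards = 10
+
+    [client.rgw.gateway-node1]
+    rgw_override_bucket_index_max_shards = 10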
+
+Once you have changed your bucket sharding configuration in your Ceph
+configuration file, restart your gateway. On Red Hat Enterprise Linux execute::
+
+ sudo systemctl restart ceph-radosgw.service
+
+On Ubuntu execute::
+
+ sudo service radosgw restart id=rgw.<short-hostname>
+
+For federated configurations, each zone may have a different ``index_pool``
+setting for failover. To make the value consistent for a zonegroup's zones, you
+may set ``rgw_override_bucket_index_max_shards`` in a gateway's zonegroup
+configuration. For example::
+
+ radosgw-admin zonegroup get > zonegroup.json
+
+Open the ``zonegroup.json`` file and edit the ``bucket_index_max_shards`` setting
+for each named zone. Save the ``zonegroup.json`` file and reset the zonegroup.
+For example::
+
+ radosgw-admin zonegroup set < zonegroup.json
+
+Once you have updated your zonegroup, update and commit the period.
+For example::
+
+ radosgw-admin period update --commit
+
+.. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
+ rule of SSD-based OSDs may also help with bucket index performance.
+
+Add Wildcard to DNS
+-------------------
+.. _Add Wildcard to DNS:
+
+To use Ceph with S3-style subdomains (e.g., bucket-name.domain-name.com), you
+need to add a wildcard to the DNS record of the DNS server you use with the
+``ceph-radosgw`` daemon.
+
+The DNS name used by the gateway must also be specified in the Ceph
+configuration file with the ``rgw dns name = {hostname}`` setting.
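+
+For example, a sketch for the gateway section of ``ceph.conf`` (reusing the
+``gateway-node1`` name from the examples above)::
+
+    [client.rgw.gateway-node1]
+    rgw dns name = gateway-node1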
+
+For ``dnsmasq``, add the following address setting with a dot (.) prepended to
+the host name::
+
+ address=/.{hostname-or-fqdn}/{host-ip-address}
+
+For example::
+
+ address=/.gateway-node1/192.168.122.75
+
+
+For ``bind``, add a wildcard to the DNS record. For example::
+
+ $TTL 604800
+ @ IN SOA gateway-node1. root.gateway-node1. (
+ 2 ; Serial
+ 604800 ; Refresh
+ 86400 ; Retry
+ 2419200 ; Expire
+ 604800 ) ; Negative Cache TTL
+ ;
+ @ IN NS gateway-node1.
+ @ IN A 192.168.122.113
+ * IN CNAME @
+
+Restart your DNS server and ping your server with a subdomain to ensure that
+your DNS configuration works as expected::
+
+ ping mybucket.{hostname}
+
+For example::
+
+ ping mybucket.gateway-node1
+
+Add Debugging (if needed)
+-------------------------
+
+Once you finish the setup procedure, if you encounter issues with your
+configuration, you can add debugging to the ``[global]`` section of your Ceph
+configuration file and restart the gateway(s) to help troubleshoot any
+configuration issues. For example::
+
+ [global]
+ #append the following in the global section.
+ debug ms = 1
+ debug rgw = 20
+
+Using the Gateway
+-----------------
+
+To use the REST interfaces, first create an initial Ceph Object Gateway user
+for the S3 interface. Then, create a subuser for the Swift interface. You then
+need to verify that the created users are able to access the gateway.
+
+Create a RADOSGW User for S3 Access
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+A ``radosgw`` user needs to be created and granted access. The command ``man
+radosgw-admin`` will provide information on additional command options.
+
+To create the user, execute the following on the ``gateway host``::
+
+ sudo radosgw-admin user create --uid="testuser" --display-name="First User"
+
+The output of the command will be something like the following::
+
+ {
+ "user_id": "testuser",
+ "display_name": "First User",
+ "email": "",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [],
+ "keys": [{
+ "user": "testuser",
+ "access_key": "I0PJDPCIYZ665MW88W9R",
+ "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+ }],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "temp_url_keys": []
+ }
+
+.. note:: The values of ``keys->access_key`` and ``keys->secret_key`` are
+ needed for access validation.
+
+.. important:: Check the key output. Sometimes ``radosgw-admin`` generates a
+   JSON escape character ``\`` in ``access_key`` or ``secret_key``
+   and some clients do not know how to handle JSON escape
+   characters. Remedies include removing the JSON escape character
+   ``\``, encapsulating the string in quotes, regenerating the key
+   and ensuring that it does not have a JSON escape character, or
+   specifying the key and secret manually. Also, if ``radosgw-admin``
+   generates a JSON escape character ``\`` and a forward slash ``/``
+   together in a key, like ``\/``, only remove the JSON escape
+   character ``\``. Do not remove the forward slash ``/`` as it is
+   a valid character in the key.
+
+Create a Swift User
+^^^^^^^^^^^^^^^^^^^
+
+A Swift subuser needs to be created if this kind of access is needed. Creating
+a Swift user is a two-step process. The first step is to create the user. The
+second is to create the secret key.
+
+Execute the following steps on the ``gateway host``:
+
+Create the Swift user::
+
+ sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
+
+The output will be something like the following::
+
+ {
+ "user_id": "testuser",
+ "display_name": "First User",
+ "email": "",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [{
+ "id": "testuser:swift",
+ "permissions": "full-control"
+ }],
+ "keys": [{
+ "user": "testuser:swift",
+ "access_key": "3Y1LNW4Q6X0Y53A52DET",
+ "secret_key": ""
+ }, {
+ "user": "testuser",
+ "access_key": "I0PJDPCIYZ665MW88W9R",
+ "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+ }],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "temp_url_keys": []
+ }
+
+Create the secret key::
+
+ sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
+
+The output will be something like the following::
+
+ {
+ "user_id": "testuser",
+ "display_name": "First User",
+ "email": "",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [{
+ "id": "testuser:swift",
+ "permissions": "full-control"
+ }],
+ "keys": [{
+ "user": "testuser:swift",
+ "access_key": "3Y1LNW4Q6X0Y53A52DET",
+ "secret_key": ""
+ }, {
+ "user": "testuser",
+ "access_key": "I0PJDPCIYZ665MW88W9R",
+ "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
+ }],
+ "swift_keys": [{
+ "user": "testuser:swift",
+ "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"
+ }],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "temp_url_keys": []
+ }
+
+Access Verification
+^^^^^^^^^^^^^^^^^^^
+
+Test S3 Access
+""""""""""""""
+
+You need to write and run a Python test script for verifying S3 access. The S3
+access test script will connect to the ``radosgw``, create a new bucket and
+list all buckets. The values for ``aws_access_key_id`` and
+``aws_secret_access_key`` are taken from the values of ``access_key`` and
+``secret_key`` returned by the ``radosgw-admin`` command.
+
+Execute the following steps:
+
+#. You will need to install the ``python-boto`` package::
+
+ sudo yum install python-boto
+
+#. Create the Python script::
+
+ vi s3test.py
+
+#. Add the following contents to the file::
+
+ import boto.s3.connection
+
+ access_key = 'I0PJDPCIYZ665MW88W9R'
+ secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'
+ conn = boto.connect_s3(
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ host='{hostname}', port={port},
+ is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
+ )
+
+ bucket = conn.create_bucket('my-new-bucket')
+ for bucket in conn.get_all_buckets():
+ print "{name} {created}".format(
+ name=bucket.name,
+ created=bucket.creation_date,
+ )
+
+
+ Replace ``{hostname}`` with the hostname of the host where you have
+ configured the gateway service i.e., the ``gateway host``. Replace ``{port}``
+ with the port number you are using with Civetweb.
+
+#. Run the script::
+
+ python s3test.py
+
+ The output will be something like the following::
+
+ my-new-bucket 2015-02-16T17:09:10.000Z
+
+Test Swift Access
+"""""""""""""""""
+
+Swift access can be verified via the ``swift`` command line client. The command
+``man swift`` will provide more information on available command line options.
+
+To install the ``swift`` client, execute the following commands. On Red Hat
+Enterprise Linux::
+
+ sudo yum install python-setuptools
+ sudo easy_install pip
+ sudo pip install --upgrade setuptools
+ sudo pip install --upgrade python-swiftclient
+
+On Debian-based distributions::
+
+ sudo apt-get install python-setuptools
+ sudo easy_install pip
+ sudo pip install --upgrade setuptools
+ sudo pip install --upgrade python-swiftclient
+
+To test swift access, execute the following::
+
+ swift -V 1 -A http://{IP ADDRESS}:{port}/auth -U testuser:swift -K '{swift_secret_key}' list
+
+Replace ``{IP ADDRESS}`` with the public IP address of the gateway server and
+``{swift_secret_key}`` with its value from the output of the ``radosgw-admin key
+create`` command executed for the ``swift`` user. Replace ``{port}`` with the port
+number you are using with Civetweb (e.g., ``7480`` is the default). If you
+don't replace the port, it will default to port ``80``.
+
+For example::
+
+ swift -V 1 -A http://10.19.143.116:7480/auth -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list
+
+The output should be::
+
+ my-new-bucket
+
+.. _Preflight: ../../start/quick-start-preflight
+.. _HTTP Frontends: ../../radosgw/frontends
diff --git a/doc/install/ceph-deploy/quick-ceph-deploy.rst b/doc/install/ceph-deploy/quick-ceph-deploy.rst
new file mode 100644
index 00000000000..c4589c7b3d3
--- /dev/null
+++ b/doc/install/ceph-deploy/quick-ceph-deploy.rst
@@ -0,0 +1,346 @@
+=============================
+ Storage Cluster Quick Start
+=============================
+
+If you haven't completed your `Preflight Checklist`_, do that first. This
+**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
+on your admin node. Create a three Ceph Node cluster so you can
+explore Ceph functionality.
+
+.. include:: quick-common.rst
+
+As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
+Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
+by adding a fourth Ceph OSD Daemon and two more Ceph Monitors.
+For best results, create a directory on your admin node for maintaining the
+configuration files and keys that ``ceph-deploy`` generates for your cluster. ::
+
+ mkdir my-cluster
+ cd my-cluster
+
+The ``ceph-deploy`` utility will output files to the current directory. Ensure you
+are in this directory when executing ``ceph-deploy``.
+
+.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
+ if you are logged in as a different user, because it will not issue ``sudo``
+ commands needed on the remote host.
+
+
+Starting over
+=============
+
+If at any point you run into trouble and you want to start over, execute
+the following to purge the Ceph packages and erase all of their data and configuration::
+
+ ceph-deploy purge {ceph-node} [{ceph-node}]
+ ceph-deploy purgedata {ceph-node} [{ceph-node}]
+ ceph-deploy forgetkeys
+ rm ceph.*
+
+If you execute ``purge``, you must re-install Ceph. The last ``rm``
+command removes any files that were written out by ceph-deploy locally
+during a previous installation.
+
+
+Create a Cluster
+================
+
+On your admin node from the directory you created for holding your
+configuration details, perform the following steps using ``ceph-deploy``.
+
+#. Create the cluster. ::
+
+ ceph-deploy new {initial-monitor-node(s)}
+
+ Specify node(s) as hostname, fqdn or hostname:fqdn. For example::
+
+ ceph-deploy new node1
+
+ Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
+ current directory. You should see a Ceph configuration file
+ (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
+ and a log file for the new cluster. See `ceph-deploy new -h`_ for
+ additional details.
+
+ Note for users of Ubuntu 18.04: Python 2 is a prerequisite of Ceph.
+ Install the ``python-minimal`` package on Ubuntu 18.04 to provide
+ Python 2::
+
+ [Ubuntu 18.04] $ sudo apt install python-minimal
+
+#. If you have more than one network interface, add the ``public network``
+ setting under the ``[global]`` section of your Ceph configuration file.
+ See the `Network Configuration Reference`_ for details. ::
+
+ public network = {ip-address}/{bits}
+
+   For example::
+
+ public network = 10.1.2.0/24
+
+ to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.
+
+#. If you are deploying in an IPv6 environment, run the following command to
+   add the ``ms bind ipv6 = true`` option to ``ceph.conf`` in the local directory::
+
+ echo ms bind ipv6 = true >> ceph.conf
+
+#. Install Ceph packages::
+
+ ceph-deploy install {ceph-node} [...]
+
+ For example::
+
+ ceph-deploy install node1 node2 node3
+
+ The ``ceph-deploy`` utility will install Ceph on each node.
+
+#. Deploy the initial monitor(s) and gather the keys::
+
+ ceph-deploy mon create-initial
+
+ Once you complete the process, your local directory should have the following
+ keyrings:
+
+ - ``ceph.client.admin.keyring``
+ - ``ceph.bootstrap-mgr.keyring``
+ - ``ceph.bootstrap-osd.keyring``
+ - ``ceph.bootstrap-mds.keyring``
+ - ``ceph.bootstrap-rgw.keyring``
+ - ``ceph.bootstrap-rbd.keyring``
+ - ``ceph.bootstrap-rbd-mirror.keyring``
+
+ .. note:: If this process fails with a message similar to "Unable to
+ find /etc/ceph/ceph.client.admin.keyring", please ensure that the
+ IP listed for the monitor node in ceph.conf is the Public IP, not
+ the Private IP.
+
+#. Use ``ceph-deploy`` to copy the configuration file and admin key to
+ your admin node and your Ceph Nodes so that you can use the ``ceph``
+ CLI without having to specify the monitor address and
+ ``ceph.client.admin.keyring`` each time you execute a command. ::
+
+ ceph-deploy admin {ceph-node(s)}
+
+ For example::
+
+ ceph-deploy admin node1 node2 node3
+
+#. Deploy a manager daemon (required only for luminous+ builds, i.e., >= 12.x builds)::
+
+      ceph-deploy mgr create node1
+
+#. Add three OSDs. For the purposes of these instructions, we assume you have an
+ unused disk in each node called ``/dev/vdb``. *Be sure that the device is not currently in use and does not contain any important data.* ::
+
+ ceph-deploy osd create --data {device} {ceph-node}
+
+ For example::
+
+ ceph-deploy osd create --data /dev/vdb node1
+ ceph-deploy osd create --data /dev/vdb node2
+ ceph-deploy osd create --data /dev/vdb node3
+
+ .. note:: If you are creating an OSD on an LVM volume, the argument to
+ ``--data`` *must* be ``volume_group/lv_name``, rather than the path to
+ the volume's block device.
+
+#. Check your cluster's health. ::
+
+ ssh node1 sudo ceph health
+
+ Your cluster should report ``HEALTH_OK``. You can view a more complete
+ cluster status with::
+
+ ssh node1 sudo ceph -s
+
+
+Expanding Your Cluster
+======================
+
+Once you have a basic cluster up and running, the next step is to expand the
+cluster. Add a Ceph Monitor and Ceph Manager to ``node2`` and ``node3`` to
+improve reliability and availability.
+
+.. ditaa::
+ /------------------\ /----------------\
+ | ceph-deploy | | node1 |
+ | Admin Node | | cCCC |
+ | +-------->+ |
+ | | | mon.node1 |
+ | | | osd.0 |
+ | | | mgr.node1 |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ | | cCCC |
+ +----------------->+ |
+ | | osd.1 |
+ | | mon.node2 |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ | | cCCC |
+ +----------------->+ |
+ | osd.2 |
+ | mon.node3 |
+ \----------------/
+
+Adding Monitors
+---------------
+
+A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
+Manager to run. For high availability, Ceph Storage Clusters typically
+run multiple Ceph Monitors so that the failure of a single Ceph
+Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
+Paxos algorithm, which requires a majority of monitors (i.e., greater
+than *N/2* where *N* is the number of monitors) to form a quorum.
+Odd numbers of monitors tend to be better, although this is not required.
+
+.. tip:: If you did not define the ``public network`` option above, then
+ the new monitor will not know which IP address to bind to on the
+ new hosts. You can add this line to your ``ceph.conf`` by editing
+ it now and then push it out to each node with
+ ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.
+
+Add two Ceph Monitors to your cluster::
+
+ ceph-deploy mon add {ceph-nodes}
+
+For example::
+
+ ceph-deploy mon add node2 node3
+
+Once you have added your new Ceph Monitors, Ceph will begin synchronizing
+the monitors and form a quorum. You can check the quorum status by executing
+the following::
+
+ ceph quorum_status --format json-pretty
+
+
+.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
+ configure NTP on each monitor host. Ensure that the
+ monitors are NTP peers.
+
+Adding Managers
+---------------
+
+The Ceph Manager daemons operate in an active/standby pattern. Deploying
+additional manager daemons ensures that if one daemon or host fails, another
+one can take over without interrupting service.
+
+To deploy additional manager daemons::
+
+ ceph-deploy mgr create node2 node3
+
+You should see the standby managers in the output from::
+
+ ssh node1 sudo ceph -s
+
+
+Add an RGW Instance
+-------------------
+
+To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
+instance of :term:`RGW`. Execute the following to create a new instance of
+RGW::
+
+ ceph-deploy rgw create {gateway-node}
+
+For example::
+
+ ceph-deploy rgw create node1
+
+By default, the :term:`RGW` instance will listen on port 7480. This can be
+changed by editing ``ceph.conf`` on the node running the :term:`RGW` as follows:
+
+.. code-block:: ini
+
+ [client]
+ rgw frontends = civetweb port=80
+
+To use an IPv6 address, use:
+
+.. code-block:: ini
+
+ [client]
+ rgw frontends = civetweb port=[::]:80
+
+
+
+Storing/Retrieving Object Data
+==============================
+
+To store object data in the Ceph Storage Cluster, a Ceph client must:
+
+#. Set an object name
+#. Specify a `pool`_
+
+The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
+calculates how to map the object to a `placement group`_, and then calculates
+how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
+object location, all you need is the object name and the pool name. For
+example::
+
+ ceph osd map {poolname} {object-name}
+
+.. topic:: Exercise: Locate an Object
+
+   As an exercise, let's create an object. Specify an object name, a path to
+ a test file containing some object data and a pool name using the
+ ``rados put`` command on the command line. For example::
+
+ echo {Test-data} > testfile.txt
+ ceph osd pool create mytest
+ rados put {object-name} {file-path} --pool=mytest
+ rados put test-object-1 testfile.txt --pool=mytest
+
+ To verify that the Ceph Storage Cluster stored the object, execute
+ the following::
+
+ rados -p mytest ls
+
+ Now, identify the object location::
+
+ ceph osd map {pool-name} {object-name}
+ ceph osd map mytest test-object-1
+
+ Ceph should output the object's location. For example::
+
+ osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]
+
+ To remove the test object, simply delete it using the ``rados rm``
+ command.
+
+ For example::
+
+ rados rm test-object-1 --pool=mytest
+
+ To delete the ``mytest`` pool::
+
+ ceph osd pool rm mytest
+
+ (For safety reasons you will need to supply additional arguments as
+ prompted; deleting pools destroys data.)
+
+As the cluster evolves, the object location may change dynamically. One benefit
+of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
+data migration or balancing manually.
+
+
+.. _Preflight Checklist: ../quick-start-preflight
+.. _Ceph Deploy: ../../rados/deployment
+.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
+.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
+.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
+.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
+.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
+.. _CRUSH Map: ../../rados/operations/crush-map
+.. _pool: ../../rados/operations/pools
+.. _placement group: ../../rados/operations/placement-groups
+.. _Monitoring a Cluster: ../../rados/operations/monitoring
+.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _User Management: ../../rados/operations/user-management
diff --git a/doc/install/ceph-deploy/quick-cephfs.rst b/doc/install/ceph-deploy/quick-cephfs.rst
new file mode 100644
index 00000000000..e8ca28f86ee
--- /dev/null
+++ b/doc/install/ceph-deploy/quick-cephfs.rst
@@ -0,0 +1,212 @@
+===================
+ CephFS Quick Start
+===================
+
+To use the :term:`CephFS` Quick Start guide, you must have executed the
+procedures in the `Storage Cluster Quick Start`_ guide first. Execute this
+quick start on the admin host.
+
+Prerequisites
+=============
+
+#. Verify that you have an appropriate version of the Linux kernel.
+ See `OS Recommendations`_ for details. ::
+
+ lsb_release -a
+ uname -r
+
+#. On the admin node, use ``ceph-deploy`` to install Ceph on your
+ ``ceph-client`` node. ::
+
+ ceph-deploy install ceph-client
+
+#. Optionally, if you want a FUSE-mounted file system, you also need to
+   install the ``ceph-fuse`` package (see the example after this list).
+
+#. Ensure that the :term:`Ceph Storage Cluster` is running and in an ``active +
+ clean`` state. ::
+
+ ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]
+
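+If you opted for a FUSE-mounted file system in step 3, a minimal sketch of
+installing the package on the client node is (the package is named
+``ceph-fuse`` on both RPM- and DEB-based distributions)::
+
+    sudo yum install ceph-fuse     # CentOS/RHEL
+    sudo apt install ceph-fuse     # Debian/Ubuntu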
+
+Deploy Metadata Server
+======================
+
+All metadata operations in CephFS happen via a metadata server, so you need at
+least one metadata server. Execute the following to create a metadata server::
+
+ ceph-deploy mds create {ceph-node}
+
+For example::
+
+ ceph-deploy mds create node1
+
+Now, your Ceph cluster would look like this:
+
+.. ditaa::
+ /------------------\ /----------------\
+ | ceph-deploy | | node1 |
+ | Admin Node | | cCCC |
+ | +-------->+ mon.node1 |
+ | | | osd.0 |
+ | | | mgr.node1 |
+ | | | mds.node1 |
+ \---------+--------/ \----------------/
+ |
+ | /----------------\
+ | | node2 |
+ | | cCCC |
+ +----------------->+ |
+ | | osd.1 |
+ | | mon.node2 |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ | | cCCC |
+ +----------------->+ |
+ | osd.2 |
+ | mon.node3 |
+ \----------------/
+
+Create a File System
+====================
+
+You have already created an MDS (`Storage Cluster Quick Start`_) but it will not
+become active until you create some pools and a file system. See
+:doc:`/cephfs/createfs`. ::
+
+ ceph osd pool create cephfs_data 32
+ ceph osd pool create cephfs_meta 32
+ ceph fs new mycephfs cephfs_meta cephfs_data
+
+.. note:: In case you have multiple Ceph applications and/or have multiple
+   CephFSs on the same cluster, it would be easier to name your pools as
+   <application>.<fs-name>.<pool-name>. In that case, the above pools would
+   be named cephfs.mycephfs.data and cephfs.mycephfs.meta.
+
+Quick word about Pools and PGs
+------------------------------
+
+Replication Number/Pool Size
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Since the default replication number/size is 3, you'd need 3 OSDs to get
+``active+clean`` for all PGs. Alternatively, you may change the replication
+number for the pool to match the number of OSDs::
+
+ ceph osd pool set cephfs_data size {number-of-osds}
+ ceph osd pool set cephfs_meta size {number-of-osds}
+
+Usually, setting ``pg_num`` to 32 gives a perfectly healthy cluster. To pick an
+appropriate value for ``pg_num``, refer to `Placement Group`_. You can also use
+the ``pg_autoscaler`` module instead. Introduced in the Nautilus release, it can
+automatically increase or decrease the value of ``pg_num``; refer to
+`Placement Group`_ to find out more about it.
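+
+As a sketch, enabling the autoscaler (Nautilus or later) and turning it on for
+the pools created above might look like this::
+
+    ceph mgr module enable pg_autoscaler
+    ceph osd pool set cephfs_data pg_autoscale_mode on
+    ceph osd pool set cephfs_meta pg_autoscale_mode on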
+
+When all OSDs are on the same node...
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+And, in case you have deployed all of the OSDs on the same node, you would need
+to create a new CRUSH rule to replicate data across OSDs and set the rule on the
+CephFS pools, since the default CRUSH rule is to replicate data across
+different nodes::
+
+ ceph osd crush rule create-replicated rule_foo default osd
+ ceph osd pool set cephfs_data crush_rule rule_foo
+ ceph osd pool set cephfs_meta crush_rule rule_foo
+
+Using Erasure Coded pools
+^^^^^^^^^^^^^^^^^^^^^^^^^
+You may also use Erasure Coded pools, which can be more efficient and
+cost-saving since they allow striping object data across OSDs and
+protecting these stripes with encoded redundancy information. The number of
+data chunks across which the object data is striped is `k`, and the number of
+coding (redundancy) chunks is `m`. You'll need to pick these values before
+creating CephFS pools. The following commands create an erasure code profile,
+create a pool that uses it, enable overwrites on it, and add it to the file
+system as a data pool::
+
+ ceph osd erasure-code-profile set ec-42-profile k=4 m=2 crush-failure-domain=host crush-device-class=ssd
+ ceph osd pool create cephfs_data_ec42 64 erasure ec-42-profile
+ ceph osd pool set cephfs_data_ec42 allow_ec_overwrites true
+ ceph fs add_data_pool mycephfs cephfs_data_ec42
+
+You can also mark directories so that they are only stored on certain pools::
+
+ setfattr -n ceph.dir.layout -v pool=cephfs_data_ec42 /mnt/mycephfs/logs
+
+This way you can choose the replication strategy for each directory on your
+Ceph file system.
+
+.. note:: Erasure Coded pools can not be used for CephFS metadata pools.
+
+Erasure coded pools were introduced in Firefly and could be used directly by
+CephFS from Luminous onwards. Refer to `this article <https://ceph.io/community/new-luminous-erasure-coding-rbd-cephfs/>`_
+by Sage Weil to understand EC, its background, limitations and other details
+in Ceph's context. Read more in the `Erasure Code`_ documentation.
+
+Mounting the File System
+========================
+
+Using Kernel Driver
+-------------------
+
+The command to mount CephFS using the kernel driver looks like this::
+
+    sudo mount -t ceph :{path-to-be-mounted} {mount-point} -o name={user-name}
+    sudo mount -t ceph :/ /mnt/mycephfs -o name=admin # usable version
+
+``{path-to-be-mounted}`` is the path within CephFS that will be mounted,
+``{mount-point}`` is the point in your file system upon which CephFS will be
+mounted and ``{user-name}`` is the name of the CephX user that has the
+authorization to mount CephFS on the machine. The following command is the
+extended form; however, these extra details are automatically figured out
+by the ``mount.ceph`` helper program::
+
+ sudo mount -t ceph {ip-address-of-MON}:{port-number-of-MON}:{path-to-be-mounted} -o name={user-name},secret={secret-key} {mount-point}
+
+If you have multiple file systems on your cluster, you would need to pass the
+``fs={fs-name}`` option to the ``-o`` option of the ``mount`` command::
+
+ sudo mount -t ceph :/ /mnt/kcephfs2 -o name=admin,fs=mycephfs2
+
+Refer to the `mount.ceph man page`_ and `Mount CephFS using Kernel Driver`_ to
+read more about this.
+
+
+Using FUSE
+----------
+
+To mount CephFS using FUSE (Filesystem in User Space) run::
+
+ sudo ceph-fuse /mnt/mycephfs
+
+To mount a particular directory within CephFS you can use ``-r``::
+
+ sudo ceph-fuse -r {path-to-be-mounted} /mnt/mycephfs
+
+If you have multiple file systems on your cluster you would need to pass
+``--client_fs {fs-name}`` to the ``ceph-fuse`` command::
+
+ sudo ceph-fuse /mnt/mycephfs2 --client_fs mycephfs2
+
+Refer to the `ceph-fuse man page`_ and `Mount CephFS using FUSE`_ to read more
+about this.
+
+.. note:: Mount the CephFS file system on the admin node, not the server node.
+
+
+Additional Information
+======================
+
+See `CephFS`_ for additional information. See `Troubleshooting`_ if you
+encounter trouble.
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _CephFS: ../../cephfs/
+.. _Troubleshooting: ../../cephfs/troubleshooting
+.. _OS Recommendations: ../os-recommendations
+.. _Placement Group: ../../rados/operations/placement-groups
+.. _mount.ceph man page: ../../man/8/mount.ceph
+.. _Mount CephFS using Kernel Driver: ../cephfs/kernel
+.. _ceph-fuse man page: ../../man/8/ceph-fuse
+.. _Mount CephFS using FUSE: ../../cephfs/fuse
+.. _Erasure Code: ../../rados/operations/erasure-code
diff --git a/doc/install/ceph-deploy/quick-common.rst b/doc/install/ceph-deploy/quick-common.rst
new file mode 100644
index 00000000000..915a7b88642
--- /dev/null
+++ b/doc/install/ceph-deploy/quick-common.rst
@@ -0,0 +1,20 @@
+.. ditaa::
+ /------------------\ /-----------------\
+ | admin-node | | node1 |
+ | +-------->+ cCCC |
+ | ceph-deploy | | mon.node1 |
+ | | | osd.0 |
+ \---------+--------/ \-----------------/
+ |
+ | /----------------\
+ | | node2 |
+ +----------------->+ cCCC |
+ | | osd.1 |
+ | \----------------/
+ |
+ | /----------------\
+ | | node3 |
+ +----------------->| cCCC |
+ | osd.2 |
+ \----------------/
+
diff --git a/doc/install/ceph-deploy/quick-rgw-old.rst b/doc/install/ceph-deploy/quick-rgw-old.rst
new file mode 100644
index 00000000000..db6474de514
--- /dev/null
+++ b/doc/install/ceph-deploy/quick-rgw-old.rst
@@ -0,0 +1,30 @@
+:orphan:
+
+===========================
+ Quick Ceph Object Storage
+===========================
+
+To use the :term:`Ceph Object Storage` Quick Start guide, you must have executed the
+procedures in the `Storage Cluster Quick Start`_ guide first. Make sure that you
+have at least one :term:`RGW` instance running.
+
+Configure new RGW instance
+==========================
+
+The :term:`RGW` instance created by the `Storage Cluster Quick Start`_ will run using
+the embedded CivetWeb webserver. ``ceph-deploy`` will create the instance and start
+it automatically with default parameters.
+
+To administer the :term:`RGW` instance, see details in the
+`RGW Admin Guide`_.
+
+Additional details may be found in the `Configuring Ceph Object Gateway`_ guide, but
+the steps specific to Apache are no longer needed.
+
+.. note:: Deploying RGW using ``ceph-deploy`` and using the CivetWeb webserver
+   instead of Apache is new functionality as of the **Hammer** release.
+
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _RGW Admin Guide: ../../radosgw/admin
+.. _Configuring Ceph Object Gateway: ../../radosgw/config-fcgi
diff --git a/doc/install/ceph-deploy/quick-rgw.rst b/doc/install/ceph-deploy/quick-rgw.rst
new file mode 100644
index 00000000000..5efda04f9ba
--- /dev/null
+++ b/doc/install/ceph-deploy/quick-rgw.rst
@@ -0,0 +1,101 @@
+===============================
+Ceph Object Gateway Quick Start
+===============================
+
+As of `firefly` (v0.80), Ceph Storage dramatically simplifies installing and
+configuring a Ceph Object Gateway. The Gateway daemon embeds Civetweb, so you
+do not have to install a web server or configure FastCGI. Additionally,
+``ceph-deploy`` can install the gateway package, generate a key, configure a
+data directory and create a gateway instance for you.
+
+.. tip:: Civetweb uses port ``7480`` by default. You must either open port
+ ``7480``, or set the port to a preferred port (e.g., port ``80``) in your Ceph
+ configuration file.
+
+To start a Ceph Object Gateway, follow the steps below:
+
+Installing Ceph Object Gateway
+==============================
+
+#. Execute the pre-installation steps on your ``client-node``. If you intend to
+ use Civetweb's default port ``7480``, you must open it using either
+ ``firewall-cmd`` or ``iptables``. See `Preflight Checklist`_ for more
+ information.
+
+#. From the working directory of your administration server, install the Ceph
+ Object Gateway package on the ``client-node`` node. For example::
+
+ ceph-deploy install --rgw <client-node> [<client-node> ...]
+
+Creating the Ceph Object Gateway Instance
+=========================================
+
+From the working directory of your administration server, create an instance of
+the Ceph Object Gateway on the ``client-node``. For example::
+
+ ceph-deploy rgw create <client-node>
+
+Once the gateway is running, you should be able to access it on port ``7480``
+(e.g., ``http://client-node:7480``).
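+
+For a quick check, a sketch using ``curl`` (``client-node`` is a placeholder
+host name) could be::
+
+    curl http://client-node:7480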
+
+Configuring the Ceph Object Gateway Instance
+============================================
+
+#. To change the default port (e.g., to port ``80``), modify your Ceph
+ configuration file. Add a section entitled ``[client.rgw.<client-node>]``,
+ replacing ``<client-node>`` with the short node name of your Ceph client
+ node (i.e., ``hostname -s``). For example, if your node name is
+ ``client-node``, add a section like this after the ``[global]`` section::
+
+ [client.rgw.client-node]
+ rgw_frontends = "civetweb port=80"
+
+ .. note:: Ensure that you leave no whitespace between ``port=<port-number>``
+ in the ``rgw_frontends`` key/value pair.
+
+   .. important:: If you intend to use port 80, make sure that the Apache
+      server is not running; otherwise, it will conflict with Civetweb. We
+      recommend removing Apache in this case.
+
+#. To make the new port setting take effect, restart the Ceph Object Gateway.
+ On Red Hat Enterprise Linux 7 and Fedora, run the following command::
+
+ sudo systemctl restart ceph-radosgw.service
+
+ On Red Hat Enterprise Linux 6 and Ubuntu, run the following command::
+
+ sudo service radosgw restart id=rgw.<short-hostname>
+
+#. Finally, check to ensure that the port you selected is open on the node's
+ firewall (e.g., port ``80``). If it is not open, add the port and reload the
+ firewall configuration. For example::
+
+ sudo firewall-cmd --list-all
+ sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
+ sudo firewall-cmd --reload
+
+ See `Preflight Checklist`_ for more information on configuring firewall with
+ ``firewall-cmd`` or ``iptables``.
+
+ You should be able to make an unauthenticated request, and receive a
+ response. For example, a request with no parameters like this::
+
+ http://<client-node>:80
+
+ Should result in a response like this::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <Owner>
+ <ID>anonymous</ID>
+ <DisplayName></DisplayName>
+ </Owner>
+ <Buckets>
+ </Buckets>
+ </ListAllMyBucketsResult>
+
+See the `Configuring Ceph Object Gateway`_ guide for additional administration
+and API details.
+
+.. _Configuring Ceph Object Gateway: ../../radosgw/config-ref
+.. _Preflight Checklist: ../quick-start-preflight
diff --git a/doc/install/ceph-deploy/quick-start-preflight.rst b/doc/install/ceph-deploy/quick-start-preflight.rst
new file mode 100644
index 00000000000..b1fdc92d228
--- /dev/null
+++ b/doc/install/ceph-deploy/quick-start-preflight.rst
@@ -0,0 +1,364 @@
+=====================
+ Preflight Checklist
+=====================
+
+The ``ceph-deploy`` tool operates out of a directory on an admin
+:term:`node`. Any host with network connectivity, a modern Python
+environment, and SSH (such as a Linux host) should work.
+
+In the descriptions below, :term:`Node` refers to a single machine.
+
+.. include:: quick-common.rst
+
+
+Ceph-deploy Setup
+=================
+
+Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
+``ceph-deploy``.
+
+Debian/Ubuntu
+-------------
+
+For Debian and Ubuntu distributions, perform the following steps:
+
+#. Add the release key::
+
+ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
+
+#. Add the Ceph packages to your repository. Use the command below and
+ replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
+ ``luminous``.) For example::
+
+ echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
+
+#. Update your repository and install ``ceph-deploy``::
+
+ sudo apt update
+ sudo apt install ceph-deploy
+
+.. note:: You can also use the EU mirror eu.ceph.com for downloading your packages by replacing ``https://download.ceph.com/`` with ``http://eu.ceph.com/``
+
+
+RHEL/CentOS
+-----------
+
+For CentOS 7, perform the following steps:
+
+#. On Red Hat Enterprise Linux 7, register the target machine with
+ ``subscription-manager``, verify your subscriptions, and enable the
+ "Extras" repository for package dependencies. For example::
+
+ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
+
+#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
+ repository::
+
+ sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+
+ Please see the `EPEL wiki`_ page for more information.
+
+#. Add the Ceph repository to your yum configuration file at ``/etc/yum.repos.d/ceph.repo`` with the following command. Replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
+ ``luminous``.) For example::
+
+ cat << EOM > /etc/yum.repos.d/ceph.repo
+ [ceph-noarch]
+ name=Ceph noarch packages
+ baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
+ enabled=1
+ gpgcheck=1
+ type=rpm-md
+ gpgkey=https://download.ceph.com/keys/release.asc
+ EOM
+
+#. You may need to install python-setuptools, which is required by ceph-deploy::
+
+      sudo yum install python-setuptools
+
+#. Update your repository and install ``ceph-deploy``::
+
+ sudo yum update
+ sudo yum install ceph-deploy
+
+.. note:: You can also use the EU mirror eu.ceph.com for downloading your packages by replacing ``https://download.ceph.com/`` with ``http://eu.ceph.com/``
+
+
+openSUSE
+--------
+
+The Ceph project does not currently publish release RPMs for openSUSE, but
+a stable version of Ceph is included in the default update repository, so
+installing it is just a matter of::
+
+ sudo zypper install ceph
+ sudo zypper install ceph-deploy
+
+If the distro version is out-of-date, open a bug at
+https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
+the following repositories:
+
+#. Hammer::
+
+ https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph
+
+#. Jewel::
+
+ https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph
+
+
+Ceph Node Setup
+===============
+
+The admin node must have password-less SSH access to Ceph nodes.
+When ceph-deploy logs in to a Ceph node as a user, that particular
+user must have passwordless ``sudo`` privileges.
+
+
+Install NTP
+-----------
+
+We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
+prevent issues arising from clock drift. See `Clock`_ for details.
+
+On CentOS / RHEL, execute::
+
+ sudo yum install ntp ntpdate ntp-doc
+
+On Debian / Ubuntu, execute::
+
+ sudo apt install ntpsec
+
+or::
+
+ sudo apt install chrony
+
+Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
+same NTP time server. See `NTP`_ for details.
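+
+For example, a sketch of enabling the time service with systemd (unit names
+vary by distribution and by the daemon you chose)::
+
+    sudo systemctl enable --now ntpd        # CentOS/RHEL with ntp
+    sudo systemctl enable --now chronyd     # if you installed chrony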
+
+
+Install SSH Server
+------------------
+
+For **ALL** Ceph Nodes perform the following steps:
+
+#. Install an SSH server (if necessary) on each Ceph Node::
+
+ sudo apt install openssh-server
+
+ or::
+
+ sudo yum install openssh-server
+
+
+#. Ensure the SSH server is running on **ALL** Ceph Nodes (see the check
+   sketched after this list).
+
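+A quick check (a sketch; the service is named ``sshd`` on CentOS/RHEL and
+``ssh`` on Debian/Ubuntu) is::
+
+    sudo systemctl status sshd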
+
+Create a Ceph Deploy User
+-------------------------
+
+The ``ceph-deploy`` utility must log in to a Ceph node as a user
+that has passwordless ``sudo`` privileges, because it needs to install
+software and configuration files without prompting for passwords.
+
+Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
+specify any user that has password-less ``sudo`` (including ``root``, although
+this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
+user you specify must have password-less SSH access to the Ceph node, as
+``ceph-deploy`` will not prompt you for a password.
+
+We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
+in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
+name across the cluster may improve ease of use (not required), but you should
+avoid obvious user names, because hackers typically use them with brute force
+hacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
+substituting ``{username}`` for the user name you define, describes how to
+create a user with passwordless ``sudo``.
+
+.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`, the "ceph" user name is reserved
+ for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
+ removing the user must be done before attempting an upgrade.
+
+#. Create a new user on each Ceph Node. ::
+
+ ssh user@ceph-server
+ sudo useradd -d /home/{username} -m {username}
+ sudo passwd {username}
+
+#. For the new user you added to each Ceph node, ensure that the user has
+ ``sudo`` privileges. ::
+
+ echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
+ sudo chmod 0440 /etc/sudoers.d/{username}
+
+
+Enable Password-less SSH
+------------------------
+
+Since ``ceph-deploy`` will not prompt for a password, you must generate
+SSH keys on the admin node and distribute the public key to each Ceph
+node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
+monitors.
+
+#. Generate the SSH keys, but do not use ``sudo`` or the
+ ``root`` user. Leave the passphrase empty::
+
+ ssh-keygen
+
+ Generating public/private key pair.
+ Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
+ Enter passphrase (empty for no passphrase):
+ Enter same passphrase again:
+ Your identification has been saved in /ceph-admin/.ssh/id_rsa.
+ Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
+
+#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
+ you created with `Create a Ceph Deploy User`_. ::
+
+ ssh-copy-id {username}@node1
+ ssh-copy-id {username}@node2
+ ssh-copy-id {username}@node3
+
+#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
+ admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
+ created without requiring you to specify ``--username {username}`` each
+ time you execute ``ceph-deploy``. This has the added benefit of streamlining
+ ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
+ created::
+
+ Host node1
+ Hostname node1
+ User {username}
+ Host node2
+ Hostname node2
+ User {username}
+ Host node3
+ Hostname node3
+ User {username}
+
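+To confirm that key-based SSH works, you can try connecting to a node from the
+admin node (``node1``, ``node2``, and ``node3`` are the example hostnames used
+above); no password should be requested::
+
+   ssh node1 hostname
+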
+
+Enable Networking On Bootup
+---------------------------
+
+Ceph OSDs peer with each other and report to Ceph Monitors over the network.
+If networking is ``off`` by default, the Ceph cluster cannot come online
+during bootup until you enable networking.
+
+The default configuration on some distributions (e.g., CentOS) has the
+networking interface(s) off by default. Ensure that, during boot up, your
+network interface(s) turn(s) on so that your Ceph daemons can communicate over
+the network. For example, on Red Hat and CentOS, navigate to
+``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
+has ``ONBOOT`` set to ``yes``.
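+
+A quick check might look like this (a sketch, assuming an interface named
+``eth0``; substitute your interface name)::
+
+   grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
+
+If the output is not ``ONBOOT=yes`` (or ``ONBOOT="yes"``), edit the file and
+set it accordingly.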
+
+
+Ensure Connectivity
+-------------------
+
+Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
+Address hostname resolution issues as necessary.
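+
+For example, from the admin node (assuming the short hostnames ``node1``,
+``node2``, and ``node3`` used in the examples above)::
+
+   ping -c 3 node1
+   ping -c 3 node2
+   ping -c 3 node3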
+
+.. note:: Hostnames should resolve to a network IP address, not to the
+ loopback IP address (e.g., hostnames should resolve to an IP address other
+ than ``127.0.0.1``). If you use your admin node as a Ceph node, you
+ should also ensure that it resolves to its hostname and IP address
+ (i.e., not its loopback IP address).
+
+
+Open Required Ports
+-------------------
+
+Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
+in a port range of ``6800:7300`` by default. See the `Network Configuration
+Reference`_ for details. Ceph OSDs can use multiple network connections to
+communicate with clients, monitors, other OSDs for replication, and other OSDs
+for heartbeats.
+
+On some distributions (e.g., RHEL), the default firewall configuration is fairly
+strict. You may need to adjust your firewall settings to allow inbound requests so
+that clients in your network can communicate with daemons on your Ceph nodes.
+
+For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
+nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
+ensure that you make the settings permanent so that they are enabled on reboot.
+
+For example, on monitors::
+
+ sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
+
+and on OSDs and MDSs::
+
+ sudo firewall-cmd --zone=public --add-service=ceph --permanent
+
+Once you have finished configuring firewalld with the ``--permanent`` flag, you
+can make the changes live immediately without rebooting::
+
+ sudo firewall-cmd --reload
+
+For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
+for Ceph OSDs. For example::
+
+ sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
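+
+and, on OSD and MDS hosts, a corresponding rule for the OSD port range (a
+sketch; adjust the interface and source network to your environment)::
+
+   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT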
+
+Once you have finished configuring ``iptables``, ensure that you make the
+changes persistent on each node so that they will be in effect when your nodes
+reboot. For example::
+
+ /sbin/service iptables save
+
+TTY
+---
+
+On CentOS and RHEL, you may receive an error while trying to execute
+``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
+nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
+requiretty`` setting. Change it to ``Defaults:{username} !requiretty`` or
+comment it out to ensure that ``ceph-deploy`` can connect using the user you
+created with `Create a Ceph Deploy User`_.
+
+.. note:: If editing ``/etc/sudoers``, ensure that you use
+ ``sudo visudo`` rather than a text editor.
+
+
+SELinux
+-------
+
+On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline your
+installation, we recommend setting SELinux to ``Permissive`` or disabling it
+entirely and ensuring that your installation and cluster are working properly
+before hardening your configuration. To set SELinux to ``Permissive``, execute the
+following::
+
+ sudo setenforce 0
+
+To configure SELinux persistently (recommended if SELinux is an issue), modify
+the configuration file at ``/etc/selinux/config``.
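+
+For example, to make the change persistent across reboots, the relevant line in
+``/etc/selinux/config`` would be::
+
+   SELINUX=permissive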
+
+
+Priorities/Preferences
+----------------------
+
+Ensure that your package manager has the priority/preferences plugin installed
+and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
+enable optional repositories. ::
+
+ sudo yum install yum-plugin-priorities
+
+For example, on RHEL 7 server, execute the following to install
+``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
+repository::
+
+ sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms
+
+
+Summary
+=======
+
+This completes the Quick Start Preflight. Proceed to the `Storage Cluster
+Quick Start`_.
+
+.. _Storage Cluster Quick Start: ../quick-ceph-deploy
+.. _OS Recommendations: ../os-recommendations
+.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
+.. _Clock: ../../rados/configuration/mon-config-ref#clock
+.. _NTP: http://www.ntp.org/
+.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
+.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL
diff --git a/doc/install/ceph-deploy/upgrading-ceph.rst b/doc/install/ceph-deploy/upgrading-ceph.rst
new file mode 100644
index 00000000000..6fbf43a236d
--- /dev/null
+++ b/doc/install/ceph-deploy/upgrading-ceph.rst
@@ -0,0 +1,235 @@
+================
+ Upgrading Ceph
+================
+
+Each release of Ceph may have additional steps. Refer to the `release notes
+document of your release`_ to identify release-specific procedures for your
+cluster before using the upgrade procedures.
+
+
+Summary
+=======
+
+You can upgrade daemons in your Ceph cluster while the cluster is online and in
+service! Certain types of daemons depend upon others. For example, Ceph Metadata
+Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
+We recommend upgrading in this order:
+
+#. `Ceph Deploy`_
+#. Ceph Monitors
+#. Ceph OSD Daemons
+#. Ceph Metadata Servers
+#. Ceph Object Gateways
+
+As a general rule, we recommend upgrading all the daemons of a specific type
+(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
+they are all on the same release. We also recommend that you upgrade all the
+daemons in your cluster before you try to exercise new functionality in a
+release.
+
+The `Upgrade Procedures`_ are relatively simple, but do look at the `release
+notes document of your release`_ before upgrading. The basic process involves
+three steps:
+
+#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
+   multiple hosts (using the ``ceph-deploy install`` command), or log in to each
+ host and upgrade the Ceph package `using your distro's package manager`_.
+ For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
+ look like this::
+
+ ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+ ceph-deploy install --release firefly mon1 mon2 mon3
+
+ **Note:** The ``ceph-deploy install`` command will upgrade the packages
+   on the specified node(s) from the old release to the release you specify.
+ There is no ``ceph-deploy upgrade`` command.
+
+#. Log in to each Ceph node and restart each Ceph daemon.
+ See `Operating a Cluster`_ for details.
+
+#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
+
+.. important:: Once you upgrade a daemon, you cannot downgrade it.
+
+
+Ceph Deploy
+===========
+
+Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::
+
+ sudo pip install -U ceph-deploy
+
+Or::
+
+ sudo apt-get install ceph-deploy
+
+Or::
+
+ sudo yum install ceph-deploy python-pushy
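+
+You can confirm the installed version afterwards::
+
+   ceph-deploy --version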
+
+
+Upgrade Procedures
+==================
+
+The following sections describe the upgrade process.
+
+.. important:: Each release of Ceph may have some additional steps. Refer to
+ the `release notes document of your release`_ for details **BEFORE** you
+ begin upgrading daemons.
+
+
+Upgrading Monitors
+------------------
+
+To upgrade monitors, perform the following steps:
+
+#. Upgrade the Ceph package for each daemon instance.
+
+ You may use ``ceph-deploy`` to address all monitor nodes at once.
+ For example::
+
+ ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+ ceph-deploy install --release hammer mon1 mon2 mon3
+
+ You may also use the package manager for your Linux distribution on
+ each individual node. To upgrade packages manually on each Debian/Ubuntu
+ host, perform the following steps::
+
+ ssh {mon-host}
+ sudo apt-get update && sudo apt-get install ceph
+
+ On CentOS/Red Hat hosts, perform the following steps::
+
+ ssh {mon-host}
+ sudo yum update && sudo yum install ceph
+
+
+#. Restart each monitor. For Ubuntu distributions, use::
+
+ sudo systemctl restart ceph-mon@{hostname}.service
+
+ For CentOS/Red Hat/Debian distributions, use::
+
+ sudo /etc/init.d/ceph restart {mon-id}
+
+ For CentOS/Red Hat distributions deployed with ``ceph-deploy``,
+ the monitor ID is usually ``mon.{hostname}``.
+
+#. Ensure each monitor has rejoined the quorum::
+
+ ceph mon stat
+
+Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.
+
+
+Upgrading an OSD
+----------------
+
+To upgrade a Ceph OSD Daemon, perform the following steps:
+
+#. Upgrade the Ceph OSD Daemon package.
+
+ You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
+ once. For example::
+
+ ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
+ ceph-deploy install --release hammer osd1 osd2 osd3
+
+ You may also use the package manager on each node to upgrade packages
+ `using your distro's package manager`_. For Debian/Ubuntu hosts, perform the
+ following steps on each host::
+
+ ssh {osd-host}
+ sudo apt-get update && sudo apt-get install ceph
+
+ For CentOS/Red Hat hosts, perform the following steps::
+
+ ssh {osd-host}
+ sudo yum update && sudo yum install ceph
+
+
+#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::
+
+ sudo systemctl restart ceph-osd@{N}.service
+
+ For multiple OSDs on a host, you may restart all of them with systemd. ::
+
+      sudo systemctl restart ceph-osd.target
+
+ For CentOS/Red Hat/Debian distributions, use::
+
+ sudo /etc/init.d/ceph restart N
+
+
+#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::
+
+ ceph osd stat
+
+Ensure that you have completed the upgrade cycle for all of your
+Ceph OSD Daemons.
+
+
+Upgrading a Metadata Server
+---------------------------
+
+To upgrade a Ceph Metadata Server, perform the following steps:
+
+#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
+ address all Ceph Metadata Server nodes at once, or use the package manager
+ on each node. For example::
+
+ ceph-deploy install --release {release-name} ceph-node1
+ ceph-deploy install --release hammer mds1
+
+ To upgrade packages manually, perform the following steps on each
+ Debian/Ubuntu host::
+
+      ssh {mds-host}
+ sudo apt-get update && sudo apt-get install ceph-mds
+
+ Or the following steps on CentOS/Red Hat hosts::
+
+      ssh {mds-host}
+ sudo yum update && sudo yum install ceph-mds
+
+
+#. Restart the metadata server. For Ubuntu, use::
+
+ sudo systemctl restart ceph-mds@{hostname}.service
+
+ For CentOS/Red Hat/Debian distributions, use::
+
+ sudo /etc/init.d/ceph restart mds.{hostname}
+
+ For clusters deployed with ``ceph-deploy``, the name is usually either
+ the name you specified on creation or the hostname.
+
+#. Ensure the metadata server is up and running::
+
+ ceph mds stat
+
+
+Upgrading a Client
+------------------
+
+Once you have upgraded the packages and restarted daemons on your Ceph
+cluster, we recommend upgrading ``ceph-common`` and client libraries
+(``librbd1`` and ``librados2``) on your client nodes too.
+
+#. Upgrade the package::
+
+ ssh {client-host}
+      sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
+
+#. Ensure that you have the latest version::
+
+ ceph --version
+
+If you do not have the latest version, you may need to uninstall the packages,
+automatically remove unused dependencies, and reinstall them.
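+
+On a Debian/Ubuntu client, that might look like this (a sketch using the
+package names listed above)::
+
+   sudo apt-get purge ceph-common librados2 librbd1
+   sudo apt-get autoremove
+   sudo apt-get install ceph-common librados2 librbd1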
+
+
+.. _using your distro's package manager: ../install-storage-cluster/
+.. _Operating a Cluster: ../../rados/operations/operating
+.. _Monitoring a Cluster: ../../rados/operations/monitoring
+.. _release notes document of your release: ../../releases