author    Sébastien Han <sebastien.han@enovance.com>  2015-06-03 19:37:06 +0200
committer Sébastien Han <seb@redhat.com>              2015-06-16 11:22:14 +0200
commit    d38cd63d26b5102af2d4dfd2cb4f94f77cf4f1be (patch)
tree      a7c30f2d48f5f173bd183de95645babebc8e0c38 /doc/rbd/rbd-openstack.rst
parent    Merge pull request #4712 from dachary/wip-11663-verify-erasure-code-profile (diff)
doc: update openstack and rgw keystone
Update the OpenStack doc with more options, recommendations and best practices.
Update the Keystone configuration for the Kilo release with Rados Gateway.

Signed-off-by: Sébastien Han <seb@redhat.com>
Diffstat (limited to 'doc/rbd/rbd-openstack.rst')
-rw-r--r--  doc/rbd/rbd-openstack.rst  41
1 file changed, 35 insertions(+), 6 deletions(-)
diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst
index ca53856435e..db232b100e2 100644
--- a/doc/rbd/rbd-openstack.rst
+++ b/doc/rbd/rbd-openstack.rst
@@ -237,6 +237,16 @@ assuming your configuration file has ``flavor = keystone+cachemanagement``::
[paste_deploy]
flavor = keystone
+Image properties
+~~~~~~~~~~~~~~~~
+
+We recommend using the following properties for your images:
+
- ``hw_scsi_model=virtio-scsi``: add the virtio-scsi controller for better performance and discard operation support
+- ``hw_disk_bus=scsi``: connect every Cinder block device to that controller
+- ``hw_qemu_guest_agent=yes``: enable the QEMU guest agent
+- ``os_require_quiesce=yes``: send fs-freeze/thaw calls through the QEMU guest agent
+
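As a sketch, the properties above can be set on an existing Glance image from the command line; the image ID below is a placeholder, and this assumes the ``glance`` client is available and authenticated:

```shell
# Apply the recommended properties to an existing Glance image.
# <image-id> is a placeholder for your image's UUID.
glance image-update \
    --property hw_scsi_model=virtio-scsi \
    --property hw_disk_bus=scsi \
    --property hw_qemu_guest_agent=yes \
    --property os_require_quiesce=yes \
    <image-id>
```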
Configuring Cinder
------------------
@@ -302,7 +312,7 @@ configure the ephemeral backend for Nova.
It is recommended to enable the RBD cache in your Ceph configuration file
(enabled by default since Giant). Moreover, enabling the admin socket
-brings a lot of benefits while troubleshoothing. Having one socket
+brings a lot of benefits while troubleshooting. Having one socket
per virtual machine using a Ceph block device will help investigate performance issues and/or wrong behaviors.
This socket can be accessed like this::
@@ -314,7 +324,17 @@ Now on every compute node edit your Ceph configuration file::
[client]
rbd cache = true
rbd cache writethrough until flush = true
- admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
+ admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
+ log file = /var/log/qemu/qemu-guest-$pid.log
+ rbd concurrent management ops = 20
+
+Configure the permissions of these paths::
+
+ mkdir -p /var/run/ceph/guests/ /var/log/qemu/
+ chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/
+
+Note that user ``qemu`` and group ``libvirtd`` may vary depending on your system.
+The provided example works for Red Hat-based systems.
.. tip:: If your virtual machine is already running you can simply restart it to get the socket
@@ -351,8 +371,7 @@ On every Compute node, edit ``/etc/nova/nova.conf`` and add::
To ensure a proper live-migration, use the following flags::
- libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
-
+ libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Juno
~~~~
@@ -383,9 +402,19 @@ under the ``[libvirt]`` section::
inject_key = false
inject_partition = -2
-To ensure a proper live-migration, use the following flags::
+To ensure a proper live-migration, use the following flags (under the ``[libvirt]`` section)::
- live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
+ live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
+
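With the flags above in place, a live migration can then be triggered per instance; a sketch with placeholder instance and destination host names, assuming the ``nova`` client is available and authenticated:

```shell
# Live-migrate a running instance to another compute node.
# "my-instance" and "compute-02" are placeholders.
nova live-migration my-instance compute-02
```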
+Kilo
+~~~~
+
+Enable discard support for the virtual machine's ephemeral root disk::
+
+ [libvirt]
+ ...
+ ...
+ hw_disk_discard = unmap # enable discard support (be careful of performance)
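Once ``hw_disk_discard = unmap`` is active and the image uses the virtio-scsi properties recommended above, unused blocks can be released from inside the guest; a sketch assuming a Linux guest and root privileges:

```shell
# Inside the guest: discard unused blocks on the root filesystem so the
# space is returned to the underlying RBD image (-v reports bytes trimmed).
fstrim -v /
```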
Restart OpenStack