author:     Sage Weil <sage@redhat.com>     2017-06-22 19:28:15 +0200
committer:  Sage Weil <sage@redhat.com>     2017-06-23 16:36:18 +0200
commit:     829e767d497cc1cd4be35f0005a1a9a8dc6d7db8
tree:       a709b00e2bbf59a4fda7354a8d27a0c32f00048d /doc/release-notes.rst
parent:     Merge pull request #15764 from tchaikov/wip-20342
doc/release-notes: update luminous notes
Signed-off-by: Sage Weil <sage@redhat.com>
Diffstat (limited to '')
-rw-r--r--  doc/release-notes.rst  292
1 file changed, 271 insertions, 21 deletions
diff --git a/doc/release-notes.rst b/doc/release-notes.rst
index 7386c068475..944fe322598 100644
--- a/doc/release-notes.rst
+++ b/doc/release-notes.rst
@@ -5,11 +5,276 @@ v12.1.0 Luminous (RC)
 ====================
 
-This is the first release candidate for Luminous, the next long term stable release.
+This is the first release candidate for Luminous, the next long term
+stable release.
+
+Ceph Luminous (v12.2.0) will be the foundation for the next long-term
+stable release series. There have been major changes since Kraken
+(v11.2.z) and Jewel (v10.2.z), and the upgrade process is non-trivial.
+Please read these release notes carefully.
 
 Major Changes from Kraken
 -------------------------
 
+- *General*:
+
+  * Ceph now has a simple, built-in web-based dashboard for monitoring
+    cluster status. FIXME DOCS.
+
+- *RADOS*:
+
+  * *BlueStore*:
+
+    - The new *BlueStore* backend for *ceph-osd* is now stable and the new
+      default for newly created OSDs. BlueStore manages the data stored by
+      each OSD by directly managing the physical HDDs or SSDs, without the
+      use of an intervening file system like XFS. This provides greater
+      performance and features. FIXME DOCS
+    - BlueStore supports *full data and metadata checksums* of all
+      data stored by Ceph.
+    - BlueStore supports inline compression using zlib, snappy, or LZ4. (Ceph
+      also supports zstd for RGW compression, but zstd is not recommended for
+      BlueStore for performance reasons.) FIXME DOCS
+
+  * *Erasure coded* pools now have full support for *overwrites*,
+    allowing them to be used with RBD and CephFS (see the examples
+    following this list).
+
+  * *ceph-mgr*:
+
+    - There is a new daemon, *ceph-mgr*, which is a required part of any
+      Ceph deployment. Although IO can continue when *ceph-mgr* is
+      down, metrics will not refresh and some metrics-related calls
+      (e.g., ``ceph df``) may block. We recommend deploying several
+      instances of *ceph-mgr* for reliability. See the notes on
+      `Upgrading`_ below.
+    - The *ceph-mgr* daemon includes a REST-based management API. The
+      API is still experimental and somewhat limited but will form the basis
+      for API-based management of Ceph going forward. FIXME DOCS
+
+  * The overall *scalability* of the cluster has improved. We have
+    successfully tested clusters with up to 10,000 OSDs.
+  * Each OSD can now have a *device class* associated with it (e.g., `hdd` or
+    `ssd`), allowing CRUSH rules to trivially map data to a subset of devices
+    in the system. Manually writing CRUSH rules or manually editing the
+    CRUSH map is normally not required (see the examples following this
+    list). FIXME DOCS
+  * *CRUSH weights* can now be optimized to maintain a *near-perfect
+    distribution of data* across OSDs. FIXME DOCS
+  * There is also a new `upmap` exception mechanism that allows
+    individual PGs to be moved around to achieve a *perfect
+    distribution* (this requires luminous clients). FIXME DOCS
+  * Each OSD now adjusts its default configuration based on whether the
+    backing device is an HDD or SSD. Manual tuning is generally not
+    required.
+  * The prototype *mclock QoS queueing algorithm* is now available. FIXME DOCS
+  * There is now a *backoff* mechanism that prevents OSDs from being
+    overloaded by requests to objects or PGs that are not currently able to
+    process IO.
+  * There is a *simplified OSD replacement process* that is more robust.
+    FIXME DOCS
+  * You can query the supported features and (apparent) releases of
+    all connected daemons and clients with ``ceph features``. FIXME DOCS
+  * You can configure the oldest Ceph client version you wish to allow to
+    connect to the cluster via ``ceph osd set-require-min-compat-client`` and
+    Ceph will prevent you from enabling features that will break compatibility
+    with those clients. FIXME DOCS
+
+- *RGW*:
+
+  * RGW introduces *server side encryption* of uploaded objects with
+    three options for the management of encryption keys: automatic
+    encryption (only recommended for test setups), customer provided
+    keys similar to the Amazon SSE-C specification, and use of an
+    external key management service (OpenStack Barbican) similar to
+    the Amazon SSE-KMS specification.
+  * RGW *metadata search* with ElasticSearch now supports end user requests
+    serviced via RGW itself, and also supports custom metadata fields.
+  * RGW now supports *dynamic bucket index sharding*. As the number
+    of objects in a bucket grows, RGW will automatically reshard the
+    bucket index in response. No user intervention or bucket size
+    capacity planning is required.
+  * RGW has consolidated the several metadata index pools via the use of
+    RADOS namespaces.
+
+- *RBD*:
+
+  * RBD now has full, stable support for *erasure coded pools* via the new
+    ``--data-pool`` option to ``rbd create``.
+  * RBD mirroring's rbd-mirror daemon is now highly available. We
+    recommend deploying several instances of rbd-mirror for
+    reliability.
+
+- *CephFS*:
+
+  * *Multiple active MDS daemons* are now considered stable. The number
+    of active MDS servers may be adjusted up or down on an active CephFS file
+    system.
+  * CephFS *directory fragmentation* is now stable and enabled by
+    default on new filesystems. To enable it on existing filesystems
+    use ``ceph fs set <fs_name> allow_dirfrags``. Large or very busy
+    directories are sharded and (potentially) distributed across
+    multiple MDS daemons automatically.
+  * Directory subtrees can be explicitly pinned to specific MDS daemons in
+    cases where the automatic load balancing is not desired or effective
+    (see the examples following this list).
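+
+For example, once OSDs report a device class, you might create a CRUSH
+rule that targets only SSDs, point a pool at it, and then restrict the
+cluster to luminous-or-newer clients. This is an illustrative sketch
+only; the rule name ``fast-ssd`` and pool name ``mypool`` are
+placeholders::
+
+  # ceph osd crush rule create-replicated fast-ssd default host ssd
+  # ceph osd pool set mypool crush_rule fast-ssd    # example pool/rule names
+  # ceph osd set-require-min-compat-client luminous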
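+
+Similarly, assuming an erasure coded pool named ``ecpool`` and a
+replicated pool named ``rbd`` already exist, overwrites can be enabled
+on the EC pool so it can hold RBD data, and a CephFS directory can be
+pinned to a specific MDS rank. All names and paths below are examples
+only::
+
+  # ceph osd pool set ecpool allow_ec_overwrites true
+  # rbd create --size 1G --data-pool ecpool rbd/myimage
+  # setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/some/dir    # pin subtree to MDS rank 1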
+
+
+Major Changes from Jewel
+------------------------
+
+- *RADOS*:
+
+  * We now default to the AsyncMessenger (``ms type = async``) instead
+    of the legacy SimpleMessenger. The most noticeable difference is
+    that we now use a fixed sized thread pool for network connections
+    (instead of two threads per socket with SimpleMessenger).
+  * Some OSD failures are now detected almost immediately, whereas
+    previously the heartbeat timeout (which defaults to 20 seconds)
+    had to expire. This prevents IO from blocking for an extended
+    period for failures where the host remains up but the ceph-osd
+    process is no longer running.
+  * The size of encoded OSDMaps has been reduced.
+  * The OSDs now quiesce scrubbing when recovery or rebalancing is in
+    progress.
+
+- *RGW*:
+
+  * RGW now supports the S3 multipart object copy-part API.
+  * It is now possible to reshard an existing bucket offline. Offline
+    bucket resharding currently requires that all IO (especially
+    writes) to the specific bucket is quiesced. (For automatic online
+    resharding, see the new feature in Luminous above. An offline
+    example follows this list.)
+  * RGW now supports data compression for objects.
+  * The Civetweb version has been upgraded to 1.8.
+  * The Swift static website API is now supported (S3 support was added
+    previously).
+  * The S3 bucket lifecycle API has been added. Note that it currently
+    only supports object expiration.
+  * Support for custom search filters has been added to the LDAP auth
+    implementation.
+  * Support for NFS version 3 has been added to the RGW NFS gateway.
+  * A Python binding has been created for librgw.
+
+- *RBD*:
+
+  * The rbd-mirror daemon now supports replicating dynamic image
+    feature updates and image metadata key/value pairs from the
+    primary image to the non-primary image.
+  * The number of image snapshots can be optionally restricted to a
+    configurable maximum.
+  * The rbd Python API now supports asynchronous IO operations.
+
+- *CephFS*:
+
+  * libcephfs function definitions have been changed to enable proper
+    uid/gid control. The library version has been increased to reflect
+    the interface change.
+  * Standby replay MDS daemons now consume less memory on workloads
+    doing deletions.
+  * Scrub now repairs backtraces, and populates `damage ls` with
+    discovered errors.
+  * A new `pg_files` subcommand to `cephfs-data-scan` can identify
+    files affected by a damaged or lost RADOS PG.
+  * The false-positive "failing to respond to cache pressure" warnings have
+    been fixed.
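+
+For example, an existing bucket might be resharded offline with
+``radosgw-admin`` along these lines (the bucket name and shard count
+are placeholders, and writes to the bucket should be stopped first)::
+
+  # radosgw-admin bucket reshard --bucket=mybucket --num-shards=64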
+
+
+.. _Upgrading:
+
+Upgrade from Jewel or Kraken
+----------------------------
+
+#. Ensure that the ``sortbitwise`` flag is enabled::
+
+     # ceph osd set sortbitwise
+
+#. Make sure your cluster is stable and healthy (no down or
+   recovering OSDs). (Optional, but recommended.)
+
+#. Set the ``noout`` flag for the duration of the upgrade. (Optional,
+   but recommended.)::
+
+     # ceph osd set noout
+
+#. Upgrade monitors by installing the new packages and restarting the
+   monitor daemons. Note that, unlike prior releases, the ceph-mon
+   daemons *must* be upgraded first::
+
+     # systemctl restart ceph-mon.target
+
+   Verify the monitor upgrade is complete once all monitors are up by
+   looking for the ``luminous`` feature string in the mon map. For
+   example::
+
+     # ceph mon dump -f json-pretty
+
+   should include::
+
+     "features": {
+         "persistent": [
+             "kraken",
+             "luminous"
+         ],
+         "optional": []
+     },
+
+#. Add or restart ``ceph-mgr`` daemons. If you are upgrading from
+   kraken, upgrade packages and restart ceph-mgr daemons with::
+
+     # systemctl restart ceph-mgr.target
+
+   If you are upgrading from jewel, you should have new ceph-mgr daemons
+   automatically appear on the same hosts as the ceph-mon daemons. If not,
+   you can deploy new daemons with tools like ceph-deploy or ceph-ansible.
+   For example::
+
+     # ceph-deploy mgr HOST
+
+   Verify the ceph-mgr daemons are running by checking ``ceph -s``::
+
+     # ceph -s
+     ...
+       services:
+        mon: 3 daemons, quorum foo,bar,baz
+        mgr: foo(active), standbys: bar, baz
+     ...
+
+#. Upgrade all OSDs by installing the new packages and restarting the
+   ceph-osd daemons on all hosts::
+
+     # systemctl restart ceph-osd.target
+
+   You can monitor the progress of the OSD upgrades with the new
+   ``ceph osd versions`` command::
+
+     # ceph osd versions
+     {
+       "ceph version 12.2.0 (...) luminous (stable)": 12,
+       "ceph version 10.2.6 (...)": 3,
+     }
+
+#. Upgrade all CephFS daemons by upgrading packages and restarting
+   daemons on all hosts::
+
+     # systemctl restart ceph-mds.target
+
+#. Upgrade all radosgw daemons by upgrading packages and restarting
+   daemons on all hosts::
+
+     # systemctl restart ceph-radosgw.target
+
+#. Complete the upgrade by disallowing pre-luminous OSDs::
+
+     # ceph osd require-osd-release luminous
+
+   If you set ``noout`` at the beginning, be sure to clear it with::
+
+     # ceph osd unset noout
+
+#. Verify the cluster is healthy with ``ceph health``.
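+
+   As a final check, the new ``ceph versions`` command summarizes the
+   release that every monitor, manager, OSD, and MDS daemon is running;
+   after a completed upgrade every entry should report a luminous
+   (12.x) version::
+
+     # ceph versions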
+
+
+Upgrading from pre-Jewel releases (like Hammer)
+-----------------------------------------------
+
+You *must* first upgrade to Jewel (10.2.z) before attempting an
+upgrade to Luminous.
+
+
+Upgrade compatibility notes, Kraken to Luminous
+-----------------------------------------------
+
   * When assigning a network to the public network and not to
     the cluster network the network specification of the public
     network will be used for the cluster network as well.
@@ -80,9 +345,6 @@ Major Changes from Kraken
     string will no longer be able to set quotas or any layout fields.
     This flag previously only restricted modification of the pool and
     namespace fields in layouts.
-  * CephFS directory fragmentation (large directory support) is enabled
-    by default on new filesystems. To enable it on existing filesystems
-    use "ceph fs set <fs_name> allow_dirfrags".
   * CephFS will generate a health warning if you have fewer standby daemons
     than it thinks you wanted. By default this will be 1 if you ever had a
     standby, and 0 if you did not. You can customize this using
@@ -91,26 +353,14 @@ Major Changes from Kraken
   * The "ceph mds tell ..." command has been removed. It is superseded
     by "ceph tell mds.<id> ..."
 
-- *MGR*
-
-  * ceph-mgr supports a default dashboard
-  * ceph-mgr introduces a new pecan based rest API
 
-- *RGW*:
+Notable Changes since Kraken
+----------------------------
 
-  * RGW introduces server side encryption of uploaded objects with 3 options for
-    the management of encryption keys, automatic encryption (only recommended for
-    test setups), customer provided keys similar to Amazon SSE-C specification and
-    using a key management service (Openstack Barbician) similar to Amazon SSE-KMS
-    specification.
-  * RGW's metadata search with ElasticSearch now supports end user requests
-    serviced via RGW itself and now supports custom metadata fields
-  * RGW has consolidated the several metadata index pools via the use of rados
-    namespaces
-  * RGW now supports dynamic bucket index sharding
+build consolidated list before final release
 
-Notable Changes
----------------
+Notable Changes since v12.0.3
+-----------------------------
 
 * bluestore: ceph-disk: add --filestore argument, default to --bluestore (`pr#15437 <https://github.com/ceph/ceph/pull/15437>`_, Loic Dachary, Sage Weil)
 * bluestore,core: os/bluestore: fix warning (`pr#15435 <https://github.com/ceph/ceph/pull/15435>`_, Sage Weil)