>=15.0.0
--------

* Monitors now have a config option ``mon_allow_pool_size_one``, which is
  disabled by default. When it is enabled, users must also pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` to confirm that
  they really intend to configure a pool size of 1.
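
  For example, to enable the option and then shrink an existing pool (the pool
  name ``mypool`` is illustrative)::

    ceph config set mon mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it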

* librbd now inherits the stripe unit and count from its parent image upon creation.
  This can be overridden by specifying different stripe settings during clone creation.

* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support luminous
  and newer clients by default. Existing clusters can enable upmap support by running
  ``ceph osd set-require-min-compat-client luminous``. It is still possible to turn
  the balancer off using the ``ceph balancer off`` command. In earlier versions,
  the balancer was included in the ``always_on_modules`` list, but needed to be
  turned on explicitly using the ``ceph balancer on`` command.
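
  For example, to check whether the balancer is active and to disable it if
  desired::

    ceph balancer status
    ceph balancer off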

* Cephadm: There were a lot of small usability improvements and bug fixes:

  * When deployed by Cephadm, Grafana now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name (see the example
    after this list).
  * ``ceph orch host ls`` now marks hosts as offline if they are not
    accessible.
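
  For example, to list only the daemons of one service and to list the known
  hosts (the service name ``mgr`` is illustrative)::

    ceph orch ps --service_name mgr
    ceph orch host ls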

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
  a YAML representation that is consumable by ``ceph orch apply``. In addition,
  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
  ``--format json-pretty``.
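
  For example, to save all current service specifications and re-apply them
  later (the file name is illustrative)::

    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml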

* CephFS: Automatic static subtree partitioning policies may now be configured
  using the new distributed and random ephemeral pinning extended attributes on
  directories. See the documentation for more information:
  https://docs.ceph.com/docs/master/cephfs/multimds/
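
  For example, on directories of a mounted file system (the paths and the
  random pinning weight are illustrative)::

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home
    setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/tmp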

* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a preview of
  the OSD specification before deploying OSDs. This makes it possible to
  verify that the specification is correct before applying it.
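
  One possible invocation, combining the flag with ``--all-available-devices``::

    ceph orch apply osd --all-available-devices --preview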

* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster.  They have been
  replaced by a tool, currently considered experimental,
  ``rgw-orphan-list``.
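
  A typical invocation passes the RGW data pool to scan; the pool name shown
  below is a common default and should be adjusted to your deployment::

    rgw-orphan-list default.rgw.buckets.data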

* RBD: The name of the rbd pool object that is used to store
  rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using
  ``rbd trash purge schedule`` functionality and have per pool or namespace
  schedules configured should copy "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.


>=16.0.0
--------

* librbd: The shared, read-only parent cache has been moved to a separate librbd
  plugin. If the parent cache was previously in use, you must also instruct
  librbd to load the plugin by adding the following to your configuration::

    rbd_plugins = parent_cache
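
  Alternatively, the option can be set through the centralized configuration
  database instead of a local configuration file::

    ceph config set client rbd_plugins parent_cache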

* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  ``OSD_TOO_MANY_REPAIRS`` health warning is generated.
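
  For example, to raise the threshold (the value 20 is illustrative)::

    ceph config set mon mon_osd_warn_num_repaired 20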

* New commands have been introduced to manipulate the required client features
  of a file system::

    ceph fs required_client_features <fs name> add <feature>
    ceph fs required_client_features <fs name> rm <feature>
    ceph fs feature ls

* OSD: A new configuration option ``osd_compact_on_start`` has been added, which
  triggers an OSD compaction on start. Setting this option to ``true`` and restarting
  an OSD will result in an offline compaction of the OSD prior to booting.
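
  For example (the restart command shown assumes a cephadm deployment, and the
  OSD id is illustrative)::

    ceph config set osd osd_compact_on_start true
    ceph orch daemon restart osd.0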

* When the noscrub and/or nodeep-scrub flags are set, either globally or per
  pool, scheduled scrubs of the disabled type will now be aborted. All
  user-initiated scrubs are NOT interrupted.
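
  For example, to disable all scrubs globally, or only deep scrubs on a single
  pool (the pool name is illustrative)::

    ceph osd set noscrub
    ceph osd pool set mypool nodeep-scrub 1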