path: root/systemd
* systemd: remove `ProtectClock=true` for `ceph-osd@.service` (Wong Hoi Sing Edison, 2021-04-14; 9 files, -10/+1)

    Ceph 16.2.0 Pacific, via https://github.com/ceph/ceph/commit/9a84d5a, introduced the
    following new systemd restrictions:

        ProtectClock=true
        ProtectHostname=true
        ProtectKernelLogs=true
        RestrictSUIDSGID=true

    However, `ceph-osd@.service` fails unexpectedly with `ProtectClock=true`; also see:

    - <https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TNBGGNN6STGDKARAQTQCIPTU4KLIVJQV/>
    - <https://serverfault.com/questions/1059317/bluestore-var-lib-ceph-osd-ceph-2-block-read-bdev-label-failed-to-open-var-l>

    This PR removes `ProtectClock=true` from our systemd service templates.

    Fixes: https://tracker.ceph.com/issues/50347
    Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
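    A minimal sketch of how an administrator could get the same effect locally with a
    drop-in override, without patching the shipped unit (the path and value here are
    illustrative, not part of the commit):

        # /etc/systemd/system/ceph-osd@.service.d/override.conf
        [Service]
        # Drop-ins merge with the shipped unit; a later assignment
        # overrides the ProtectClock=true set by the package.
        ProtectClock=false

    followed by `systemctl daemon-reload` and a restart of the affected `ceph-osd@`
    instances.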
* systemd: cephfs-mirror systemd unit files (Venky Shankar, 2021-01-12; 4 files, -0/+43)

    Signed-off-by: Venky Shankar <vshankar@redhat.com>
* systemd: Support Graceful Reboot for AIO Node (Wong Hoi Sing Edison, 2020-09-18; 15 files, -16/+35)

    Ceph AIO installation on a single node (or multiple nodes) is not friendly to
    loopback mounts; in particular, it regularly hits a deadlock during graceful system
    reboot.

    We already ship `rbdmap.service` in a graceful-reboot-friendly form:

        [Unit]
        After=network-online.target
        Before=remote-fs-pre.target
        Wants=network-online.target remote-fs-pre.target

        [Service]
        ExecStart=/usr/bin/rbdmap map
        ExecReload=/usr/bin/rbdmap map
        ExecStop=/usr/bin/rbdmap unmap-all

    This PR introduces:

    - `ceph-mon.target`: ensure startup after `network-online.target` and before
      `remote-fs-pre.target`
    - `ceph-*.target`: ensure startup after `ceph-mon.target` and before
      `remote-fs-pre.target`
    - `rbdmap.service`: once all `_netdev` mounts are unmounted by `remote-fs.target`,
      ensure all RBDs are unmapped BEFORE any Ceph components under `ceph.target` are
      stopped during shutdown

    The logic is proven as a concept by
    <https://github.com/alvistack/ansible-role-ceph_common/tree/develop>; it also works
    as expected with a Ceph + Kubernetes deployment via
    <https://github.com/alvistack/ansible-collection-kubernetes/tree/develop>. No more
    deadlocks happen during graceful system reboot, for both AIO single- and
    multiple-node setups with loopback mounts.

    Also see:

    - <https://github.com/ceph/ceph/pull/36776>
    - <https://github.com/etcd-io/etcd/pull/12259>
    - <https://github.com/cri-o/cri-o/pull/4128>
    - <https://github.com/kubernetes/release/pull/1504>

    Fixes: https://tracker.ceph.com/issues/47528
    Signed-off-by: Wong Hoi Sing Edison <hswong3i@gmail.com>
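    A sketch of the ordering the first bullet describes, applied to `ceph-mon.target`
    (directives inferred from the description above, not copied from the diff):

        # ceph-mon.target (sketch)
        [Unit]
        After=network-online.target
        Before=remote-fs-pre.target
        Wants=network-online.target remote-fs-pre.target

    Since systemd's shutdown ordering is the reverse of startup, this stops the mon only
    after everything ordered before `remote-fs-pre.target` has already been torn down.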
* systemd/ceph-osd: ceph-osd-prestart.sh now lives in /usr/libexec (Jan Fajerski, 2020-06-12; 1 file, -1/+1)

    Fixes: https://tracker.ceph.com/issues/45984
    Fixes: ed6552d5067c9f1d34c426f9ae18b0c37f2a9d29
    Signed-off-by: Jan Fajerski <jfajerski@suse.com>
* systemd: lock down more privileges (Patrick Donnelly, 2020-05-09; 8 files, -57/+90)

    Including:

        ProtectClock=true
        ProtectHostname=true
        ProtectKernelLogs=true
        RestrictSUIDSGID=true

    Also, alphabetize [Service] settings.

    Finally, add some protections to systemd/ceph-immutable-object-cache@.service.in
    that are present in our other service files but were missing from this one.

    Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* systemd: Wait 5 seconds before attempting a restart of an OSD (Wido den Hollander, 2019-11-12; 1 file, -0/+1)

    In commit 92f8ec the RestartSec parameter was removed, which causes systemd to
    restart a failed OSD immediately.

    After a reboot, while the network is still coming online, this can cause problems.
    Although network-online.target should guarantee us that the network is online, it
    doesn't guarantee that DNS resolution works. If mon_host points to a DNS entry, it
    could be that this cannot be resolved yet, and the OSDs then fail to start on boot.

    Fixes: https://tracker.ceph.com/issues/42761
    Signed-off-by: Wido den Hollander <wido@42on.com>
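    A sketch of the resulting restart stanza in `ceph-osd@.service` (the delay value is
    from the commit subject; its exact placement is an assumption):

        [Service]
        Restart=on-failure
        # Give the network (and DNS) a moment to settle before retrying.
        RestartSec=5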
* systemd: ceph-mgr: set MemoryDenyWriteExecute to false (Ricardo Dias, 2019-05-09; 1 file, -1/+5)

    Fixes: http://tracker.ceph.com/issues/39628
    Signed-off-by: Ricardo Dias <rdias@suse.com>
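    For context, a sketch of the change in `ceph-mgr@.service` (the rationale in the
    comment is an assumption, not taken from the commit):

        [Service]
        # ceph-mgr embeds Python for its modules; some of them (e.g. via
        # cffi/JIT-style allocators) may need writable-and-executable memory
        # mappings, which MemoryDenyWriteExecute=true would forbid.
        MemoryDenyWriteExecute=false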
* build/ops: adding build spec for immutable object cache daemon (Yuan Zhou, 2019-03-21; 1 file, -0/+2)

    Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
* tools: adding ceph level immutable obj cache daemon (Yuan Zhou, 2019-03-21; 2 files, -0/+30)

    The daemon is built for future integration with both RBD and RGW cache.

    The key components are:

    - domain socket based simple IPC
    - simple LRU policy based promotion/demotion for the cache
    - simple file based caching store for RADOS objs with sync IO interface
    - systemd service/target files for the daemon

    Signed-off-by: Dehao Shang <dehao.shang@intel.com>
    Signed-off-by: Yuan Zhou <yuan.zhou@intel.com>
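    A sketch of what the last bullet's template unit could look like (names and options
    are illustrative, not copied from the added files):

        # ceph-immutable-object-cache@.service (sketch)
        [Unit]
        Description=Ceph immutable object cache daemon
        PartOf=ceph-immutable-object-cache.target

        [Service]
        # %i is the instance name, mapped to a client identity.
        ExecStart=/usr/bin/ceph-immutable-object-cache -f --cluster ceph --name client.%i
        Restart=on-failure

        [Install]
        WantedBy=ceph-immutable-object-cache.target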
* systemd: lock down privileges more (Patrick Donnelly, 2019-02-07; 7 files, -1/+51)

    Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* systemd: enable ceph-rbd-mirror.target (Sébastien Han, 2018-11-05; 1 file, -0/+1)

    Without this, the rbd-mirror units will never start after a system reboot. The
    rbd-mirror unit requires ceph-rbd-mirror.target in order to start; since that target
    currently never gets enabled, the daemon won't start after a reboot.

    Signed-off-by: Sébastien Han <seb@redhat.com>
* Merge pull request #22349 from gregsfortytwo/wip-24368-osd-restarts (Gregory Farnum, 2018-10-19; 1 file, -2/+1)

    systemd: only restart 3 times in 30 minutes, as fast as possible

    Reviewed-by: Sage Weil <sage@redhat.com>
| * systemd: only restart 3 times in 30 minutes, as fast as possible (Greg Farnum, 2018-06-01; 1 file, -2/+1)

    Once upon a time, we configured our init systems to only restart an OSD 3 times in a
    30 minute period. This made sure a permanently-slow OSD would stay dead, and that an
    OSD which was dying on boot (but only after a long boot process) would not insist on
    rejoining the cluster for *too* long.

    In 62084375fa8370ca3884327b4a4ad28e0281747e, Boris applied these same rules to
    systemd in a great bid for init system consistency. Hurray!

    Sadly, Loic discovered that the great dragons udev and ceph-disk were susceptible to
    races under systemd (that we apparently didn't see with the other init systems?),
    and our 3x start limit was preventing the system from sorting them out. In
    b3887379d6dde3b5a44f2e84cf917f4f0a0cb120 he configured the system to allow *30*
    restarts in 30 minutes, but no more frequently than every 20 seconds.

    So that resolved the race issue, which was far more immediately annoying than any
    concern about OSDs sometimes taking too long to die. But I've started hearing
    in-person reports about OSDs not failing hard and fast when they go bad, and I
    attribute some of those reports to these init system differences.

    Happily, we no longer rely on udev and ceph-disk, and ceph-volume shouldn't be
    susceptible to the same race, so I think we can just go back to the old way.

    Partly-reverts: b3887379d6dde3b5a44f2e84cf917f4f0a0cb120
    Partly-fixes: http://tracker.ceph.com/issues/24368
    Signed-off-by: Greg Farnum <gfarnum@redhat.com>
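    A sketch of the rate-limit knobs involved (values from the subject line; note that
    newer systemd spells these StartLimitIntervalSec=/StartLimitBurst= in [Unit]):

        [Service]
        Restart=on-failure
        # "As fast as possible": rely on systemd's default (100 ms) restart
        # delay, but give up after 3 failed starts within 30 minutes.
        StartLimitInterval=30min
        StartLimitBurst=3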
* | add ceph-crash service (Dan Mick, 2018-08-09; 3 files, -0/+15)

    ceph-crash runs from systemd and watches /var/lib/ceph/crash for crashdumps, posting
    them to the mgrs using the mgr's crash plugin.

    Signed-off-by: Dan Mick <dan.mick@redhat.com>
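    A minimal sketch of such a watcher unit (directives are illustrative, not copied
    from the added files):

        # ceph-crash.service (sketch)
        [Unit]
        Description=Ceph crash dump collector
        After=network-online.target

        [Service]
        # The daemon itself polls /var/lib/ceph/crash and posts new dumps
        # to a mgr via the crash module.
        ExecStart=/usr/bin/ceph-crash
        Restart=always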
* | systemd/rbdmap.service: order us before remote-fs-pre.target (Ilya Dryomov, 2018-06-29; 1 file, -1/+2)

    If "/usr/bin/rbdmap unmap-all" notices a file system mounted on top of an rbd
    device, it will call umount, interfering with systemd shutdown logic. Make sure we
    aren't invoked until all _netdev mounts are dealt with by systemd.

    Fixes: http://tracker.ceph.com/issues/24713
    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* | systemd/rbdmap.service: remove a dependency on local-fs.target (Ilya Dryomov, 2018-06-28; 1 file, -2/+2)

    We don't require anything outside of rootfs.

    Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* | systemd: remove ceph-disk from CMakeLists (Alfredo Deza, 2018-06-13; 1 file, -1/+0)

    Signed-off-by: Alfredo Deza <adeza@redhat.com>
* | systemd: remove ceph-disk service (Alfredo Deza, 2018-06-13; 1 file, -11/+0)

    Signed-off-by: Alfredo Deza <adeza@redhat.com>
* cmake: s/sysconf/sysconfig/ (Kefu Chai, 2018-02-28; 1 file, -1/+1)

    It's a regression caused by 638aadf.

    Signed-off-by: Kefu Chai <kchai@redhat.com>
* cmake,deb: set EnvironmentFile using cmake (Kefu Chai, 2018-02-27; 9 files, -16/+29)

    This change also fixes the EnvironmentFile specified in rbdmap.service. Without this
    change, the EnvironmentFile in rbdmap.service is always /etc/sysconfig/ceph, even on
    Debian-derived distros. After this change, the variable is /etc/default/ceph in the
    rbdmap.service shipped by the deb packages.

    Signed-off-by: Kefu Chai <kchai@redhat.com>
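    A sketch of how such cmake templating typically looks (the @...@ token name is
    hypothetical, not taken from the tree): the unit file becomes a .in template whose
    placeholder configure_file() resolves per packaging target, e.g. /etc/default/ceph
    for deb and /etc/sysconfig/ceph for rpm:

        # rbdmap.service.in (sketch)
        [Service]
        # "-" prefix: a missing environment file is not an error.
        EnvironmentFile=-@SYSTEMD_ENV_FILE@
        ExecStart=/usr/bin/rbdmap map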
* debian: install system units using cmake (Kefu Chai, 2018-02-27; 1 file, -1/+3)

    Signed-off-by: Kefu Chai <kchai@redhat.com>
* systemd: Wait 10 seconds before restarting ceph-mgr (Wido den Hollander, 2018-02-22; 1 file, -0/+1)

    We do this for the MON and OSD as well: wait for a few seconds before we attempt a
    restart. On boot in IPv6 networks it might take a few seconds longer before an IP
    address is usable, which does not allow the mgr to start right away.

    Fixes: http://tracker.ceph.com/issues/23083
    Signed-off-by: Wido den Hollander <wido@42on.com>
* build/ops: rpm: rip out rcceph script (Nathan Cutler, 2018-01-15; 1 file, -65/+0)

    "rcceph" is a SysVinit-style command-line interface for stopping, starting,
    enabling, etc. all ceph-osd and ceph-mon systemd units on a machine, in one go.

    Since the same functionality is provided by ceph-{osd,mon}.target, the script is
    obsolete. It is also unmaintained. Judging from the absence of recent mentions of
    the script online, I guess it is no longer used.

    Leaving dead code in the tree can cause confusion, especially when the code is
    packaged and shipped to customers. Therefore I propose to rip it out.

    Signed-off-by: Nathan Cutler <ncutler@suse.com>
* rbd-mirror: does not start on reboot (Sébastien Han, 2017-09-26; 1 file, -0/+1)

    The current systemd unit file misses 'PartOf=ceph-rbd-mirror.target', which results
    in the unit not starting after reboot. If you have
    ceph-rbd-mirror@rbd-mirror.ceph-rbd-mirror0, it won't start after reboot even if
    enabled. Adding 'PartOf=ceph-rbd-mirror.target' will enable ceph-rbd-mirror.target
    when ceph-rbd-mirror@rbd-mirror.ceph-rbd-mirror0 gets enabled.

    Signed-off-by: Sébastien Han <seb@redhat.com>
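    A sketch of the two directives that tie an instance to its target (only PartOf= is
    added by this commit; the [Install] wiring shown here is an assumption about how the
    units hang together):

        # ceph-rbd-mirror@.service (sketch)
        [Unit]
        # Stop/restart this instance whenever the target is stopped/restarted.
        PartOf=ceph-rbd-mirror.target

        [Install]
        # Enabling an instance creates the link that pulls in the target.
        WantedBy=ceph-rbd-mirror.target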
* Merge pull request #16494 from asomers/bin_bash (Kefu Chai, 2017-08-27; 1 file, -1/+1)

    misc: Fix bash path in shebangs

    Reviewed-by: Willem Jan Withagen <wjw@digiware.nl>
    Reviewed-by: Kefu Chai <kchai@redhat.com>
| * scripts: fix bash path in shebangs (Alan Somers, 2017-07-27; 1 file, -1/+1)

    /bin/bash is a Linuxism. Other operating systems install bash to different paths.
    Use /usr/bin/env in shebangs to find bash.

    Signed-off-by: Alan Somers <asomers@gmail.com>
* | systemd: include the ceph-volume service (Alfredo Deza, 2017-08-04; 1 file, -0/+1)

    Signed-off-by: Alfredo Deza <adeza@redhat.com>
* | systemd: create a service file for ceph-volume (Alfredo Deza, 2017-08-04; 1 file, -0/+14)

    Signed-off-by: Alfredo Deza <adeza@redhat.com>
* Merge pull request #15835 from SUSE/wip-flatten-systemd-target-hierarchy-master (Sage Weil, 2017-07-03; 7 files, -0/+7)

    systemd: Add explicit Before=ceph.target

    Reviewed-by: Nathan Cutler <ncutler@suse.com>
    Reviewed-by: Boris Ranto <branto@redhat.com>
| * systemd: Add explicit Before=ceph.target (Tim Serong, 2017-06-30; 7 files, -0/+7)

    The PartOf= and WantedBy= directives in the various systemd unit files and targets
    create the following logical hierarchy:

    - ceph.target
      - ceph-fuse.target
        - ceph-fuse@.service
      - ceph-mds.target
        - ceph-mds@.service
      - ceph-mgr.target
        - ceph-mgr@.service
      - ceph-mon.target
        - ceph-mon@.service
      - ceph-osd.target
        - ceph-osd@.service
      - ceph-radosgw.target
        - ceph-radosgw@.service
      - ceph-rbd-mirror.target
        - ceph-rbd-mirror@.service

    Additionally, the ceph-{fuse,mds,mon,osd,radosgw,rbd-mirror} targets have
    WantedBy=multi-user.target. This gives the following behaviour:

    - `systemctl {start,stop,restart}` of any target will restart all dependent services
      (e.g.: `systemctl restart ceph.target` will restart all services;
      `systemctl restart ceph-mon.target` will restart all the mons, and so forth).
    - `systemctl {enable,disable}` for the second-level targets (ceph-mon.target etc.)
      will cause dependent services to come up on boot, or not (of course the individual
      services can be enabled or disabled as well - for a service to start on boot, both
      the service and its target must be enabled; disabling either will cause the
      service to be disabled).
    - `systemctl {enable,disable} ceph.target` has no effect on whether or not services
      come up at boot; if the second-level targets and services are enabled, they'll
      start regardless of whether ceph.target is enabled. This is due to the
      second-level targets all having WantedBy=multi-user.target.
    - The OSDs will always start regardless of ceph-osd.target (unless they are
      explicitly masked), thanks to udev magic.

    So far, so good. Except, several users have encountered services not starting with
    the following error:

        Failed to start ceph-osd@5.service: Transaction order is cyclic.
        See system logs for details.

    I've not been able to reproduce this myself in such a way as to cause OSDs to fail
    to start, but I *have* managed to get systemd into that same confused state, as
    follows:

    - Disable ceph.target, ceph-mon.target, ceph-osd.target,
      ceph-mon@$(hostname).service and all ceph-osd instances.
    - Re-enable all of the above.

    At this point, everything is fine, but if I then subsequently disable ceph.target,
    *then* try `systemctl restart ceph.target`, I get "Failed to restart ceph.target:
    Transaction order is cyclic. See system logs for details."

    Explicitly adding Before=ceph.target to each second-level target prevents systemd
    from becoming confused in this situation.

    Signed-off-by: Tim Serong <tserong@suse.com>
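    A sketch of what the added line looks like in each second-level target (the
    Description and [Install] lines are illustrative context, not part of the diff):

        # ceph-mon.target (sketch)
        [Unit]
        Description=ceph target allowing to start/stop all ceph-mon@.service instances at once
        PartOf=ceph.target
        # The new, explicit ordering: start before, and stop after, ceph.target.
        Before=ceph.target

        [Install]
        WantedBy=multi-user.target ceph.target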
* | Merge pull request #15585 from dachary/wip-20229-ceph-disk-timeout (Sage Weil, 2017-07-01; 1 file, -1/+1)

    ceph-disk: set the default systemd unit timeout to 3h

    Reviewed-by: Josh Durgin <jdurgin@redhat.com>
| * | ceph-disk: set the default systemd unit timeout to 3h (Loic Dachary, 2017-06-08; 1 file, -1/+1)

    There needs to be a timeout to prevent ceph-disk from hanging forever. But there is
    no good reason to set it to a value that is less than a few hours.

    Each OSD activation needs to happen in sequence and not in parallel, which is why
    there is a global activation lock.

    It would be possible, when an OSD is using a device that is not otherwise used by
    another OSD (i.e. they do not share an SSD journal device etc.), to run all
    activations in parallel. It would however require a more extensive modification of
    ceph-disk to avoid any chance of races.

    Fixes: http://tracker.ceph.com/issues/20229
    Signed-off-by: Loic Dachary <loic@dachary.org>
* / systemd/ceph-mgr: remove automagic mgr creation hack (Sage Weil, 2017-06-29; 2 files, -15/+3)

    For kraken we auto-created mgr daemons next to mon daemons with some systemd
    hackery. This is awkward (you can't not get a new mgr daemon when you deploy a mon),
    systemd-specific (not implemented for upstart on trusty), and mostly unexpected.

    Since ceph-mgr daemons are now first-class citizens and required for every cluster,
    make their deployment explicit and transparent to the administrator. Major upgrades
    are a rare opportunity to have the administrator's full attention, so take advantage
    of it.

    This effectively reverts 61d779345e9efbe9a2e3f215af1f1dcf6630f04a and
    082199f69dd0bd4c18a5f4baea67a88782586657 (and follow-on fixes).

    Fixes/avoids: http://tracker.ceph.com/issues/19994
    Signed-off-by: Sage Weil <sage@redhat.com>
* systemd: update mgr auth caps (John Spray, 2017-05-03; 1 file, -1/+1)

    Granting it 'allow *' on mon and osd so that it can use MCommand to remote control
    daemons.

    Signed-off-by: John Spray <john.spray@redhat.com>
* systemd/ceph-mgr@.service: fix mgr mon cap (Sage Weil, 2017-03-29; 1 file, -1/+3)

    Signed-off-by: Sage Weil <sage@redhat.com>
* Merge pull request #13197 from asheplyakov/master-18740 (Kefu Chai, 2017-03-24; 1 file, -1/+2)

    systemd/ceph-disk: make it possible to customize timeout

    Reviewed-by: Loic Dachary <ldachary@redhat.com>
    Reviewed-by: Kefu Chai <kchai@redhat.com>
| * systemd/ceph-disk: make it possible to customize timeout (Alexey Sheplyakov, 2017-02-06; 1 file, -1/+2)

    When booting a server with 20+ HDDs, udev has to process a *lot* of events
    (especially if dm-crypt is used), and 2 minutes might not be enough for that. Make
    it possible to override the timeout (via systemd drop-in files), and use a longer
    timeout (5 minutes) by default.

    Fixes: http://tracker.ceph.com/issues/18740
    Signed-off-by: Alexey Sheplyakov <asheplyakov@mirantis.com>
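    A sketch of such a drop-in override (the file name, and the environment variable the
    unit is assumed to consume in its ExecStart, are hypothetical):

        # /etc/systemd/system/ceph-disk@.service.d/timeout.conf
        [Service]
        # Assumes the unit wraps ceph-disk in `timeout $CEPH_DISK_TIMEOUT ...`;
        # raise the limit to 10 minutes on this host.
        Environment=CEPH_DISK_TIMEOUT=600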
* | rbdmap: unmap RBDMAPFILE images unless called with unmap-all (David Disseldorp, 2017-02-16; 1 file, -1/+1)

    When called with a "map" parameter, the rbdmap script iterates the list of images
    present in RBDMAPFILE (/etc/ceph/rbdmap), and maps each entry.

    When called with "unmap", rbdmap currently iterates *all* mapped RBD images and
    unmaps each one, regardless of whether it's listed in the RBDMAPFILE or not.

    This commit adds functionality such that only RBD images listed in the configuration
    file are unmapped. This behaviour is the new default for "rbdmap unmap". A new
    "unmap-all" parameter is added to offer the old unmap-all-rbd-images behaviour,
    which is used by the systemd service.

    Fixes: http://tracker.ceph.com/issues/18884
    Signed-off-by: David Disseldorp <ddiss@suse.de>
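    For reference, an illustrative RBDMAPFILE entry of the kind the script iterates
    (the pool, image, and keyring names here are made up):

        # /etc/ceph/rbdmap
        # poolname/imagename        id=client,keyring=path-to-keyring
        rbd/myimage                 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    With this change, `rbdmap unmap` only unmaps images listed here, while
    `rbdmap unmap-all` (used by rbdmap.service's ExecStop) keeps the old behaviour of
    unmapping every mapped RBD device.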
* | Merge pull request #13097 from ceph/wip-osd-after-mon (Nathan Cutler, 2017-02-09; 1 file, -1/+1)

    systemd: Start OSDs after MONs

    Reviewed-by: Gregory Farnum <gfarnum@redhat.com>
    Reviewed-by: Ken Dreyer <kdreyer@redhat.com>
    Reviewed-by: Nathan Cutler <ncutler@suse.com>
| * systemd: Start OSDs after MONs (Boris Ranto, 2017-01-25; 1 file, -1/+1)

    Currently, we start/stop OSDs and MONs simultaneously. This may cause problems,
    especially when we are shutting down the system. Once the mon goes down it causes a
    re-election, and the MONs can miss the message from the OSD that is going down.

    Resolves: http://tracker.ceph.com/issues/18516
    Signed-off-by: Boris Ranto <branto@redhat.com>
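    A one-line sketch of the ordering this implies (exactly where the After= lands is an
    assumption; since shutdown ordering is the reverse, OSDs also stop before MONs):

        # ceph-osd@.service (sketch)
        [Unit]
        After=ceph-mon.target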
* | systemd: Restart Mon after 10s in case of failure (Wido den Hollander, 2017-01-23; 1 file, -1/+2)

    In some situations the IP address the Monitor wants to bind to might not be
    available yet. This might for example be an IPv6 address which is still performing
    DAD or waiting for a Router Advertisement to be sent by the router(s).

    Have systemd wait for 10s before restarting the Mon, and increase the number of
    times it does so to 5. This allows the system to bring up IP addresses in the
    meantime, while systemd waits before restarting the Mon.

    Fixes: #18635
    Signed-off-by: Wido den Hollander <wido@42on.com>
* Fix startup of Ceph cluster manager daemon on Debian 8 (Mark Korenberg, 2016-12-18; 1 file, -3/+5)

    Signed-off-by: Mark Korenberg <socketpair@gmail.com>
* Merge pull request #11542 from batrick/systemd-ceph-fuse (John Spray, 2016-12-14; 3 files, -0/+25)

    systemd: add ceph-fuse service file

    Reviewed-by: John Spray <john.spray@redhat.com>
| * systemd: add ceph-fuse service file (Patrick Donnelly, 2016-12-02; 3 files, -0/+25)

    Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* | build/ops: restart ceph-osd@.service after 20s instead of 100ms (Loic Dachary, 2016-12-01; 1 file, -1/+2)

    Instead of the default 100ms pause before trying to restart an OSD, wait 20 seconds
    instead and retry 30 times instead of 3. There is no scenario in which restarting an
    OSD almost immediately after it failed would get a better result.

    It is possible that a failure to start is due to a race with another systemd unit at
    boot time. For instance, if ceph-disk@.service is delayed, it may start after the
    OSD that needs it. A long pause may give the racing service enough time to complete,
    and the next attempt to start the OSD may succeed.

    This is not a sound alternative to resolve a race, it only makes the OSD boot
    process less sensitive. In the example above, the proper fix is to enable --runtime
    ceph-osd@.service so that it cannot race at boot time.

    The wait delay should not be minutes, to preserve the current runtime behavior. For
    instance, if an OSD is killed or fails and restarts after 10 minutes, it will be
    marked down by the ceph cluster. This is not a change that could break things, but
    it is significant and should be avoided.

    Refs: http://tracker.ceph.com/issues/17889
    Signed-off-by: Loic Dachary <loic@dachary.org>
* systemd/ceph-disk: reduce ceph-disk flock contention (David Disseldorp, 2016-11-28; 1 file, -1/+1)

    "ceph-disk trigger" invocation is currently performed in a mutually exclusive
    fashion, with each call first taking an flock on the path /var/lock/ceph-disk. On
    systems with a lot of osds, this leads to a large amount of lock contention during
    boot-up, and can cause some service instances to trip the 120 second timeout.

    Take an flock on a device-specific path instead of /var/lock/ceph-disk, so that
    concurrent "ceph-disk trigger" invocations are permitted for independent osds. This
    greatly reduces lock contention and consequently the chance of service timeout.
    Per-device concurrency restrictions required for
    http://tracker.ceph.com/issues/13160 are maintained.

    Fixes: http://tracker.ceph.com/issues/18049
    Signed-off-by: David Disseldorp <ddiss@suse.de>
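    A sketch of what such a per-device lock could look like in the unit's ExecStart (the
    exact command line is an assumption, not the committed one):

        # ceph-disk@.service (sketch)
        [Service]
        # %f expands to the instance's device path (e.g. /dev/sdb1), so each
        # device serializes only against itself, not against all other OSDs.
        ExecStart=/bin/sh -c 'flock /var/lock/ceph-disk-$(basename %f) \
            /usr/sbin/ceph-disk --verbose --log-stdout trigger --sync %f'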
* ceph-disk: systemd unit must run after local-fs.target (Loic Dachary, 2016-11-22; 1 file, -0/+2)

    A ceph udev action may be triggered before the local file systems are mounted,
    because there is no ordering in udev. The ceph udev action delegates asynchronously
    to systemd via ceph-disk@.service, which will fail if (for instance) the LVM
    partition required to mount /var/lib/ceph is not available yet. The systemd unit
    will retry a few times but will eventually fail permanently. The sysadmin can
    systemctl reset-fail at a later time and it will succeed.

    Add a dependency to ceph-disk@.service so that it waits until the local file systems
    are mounted:

        After=local-fs.target

    Since local-fs.target depends on lvm, it will wait until the lvm partition (as well
    as any dm devices) is ready and mounted before attempting to activate the OSD. It
    may still fail because the corresponding journal/data partition is not ready yet
    (which is expected), but it will no longer fail because the lvm/filesystems/dm are
    not ready.

    Fixes: http://tracker.ceph.com/issues/17889
    Signed-off-by: Loic Dachary <loic@dachary.org>
* systemd/CMakeLists.txt: remove ceph-create-keys cmake (Owen Synge, 2016-11-04; 1 file, -1/+0)

    ceph-create-keys should not be started on boot of mons with systemd, so it should
    not appear as 'After' or 'Wants' for the ceph-mon.service.

    Signed-off-by: Owen Synge <osynge@suse.com>
* systemd/ceph-mon@.service: remove ceph-create-keys for mon in systemd (Owen Synge, 2016-11-04; 1 file, -2/+2)

    ceph-create-keys should not be started on boot of mons with systemd, so it should
    not appear as 'After' or 'Wants' for the ceph-mon.service.

    Signed-off-by: Owen Synge <osynge@suse.com>
* systemd/ceph-create-keys@.service: remove ceph-create-keys for systemd (Owen Synge, 2016-11-04; 1 file, -10/+0)

    ceph-create-keys should not be started on boot of mons with systemd, so it should
    not exist in the systemd files.

    Signed-off-by: Owen Synge <osynge@suse.com>