path: root/src
* Merge pull request #60157 from soumyakoduri/wip-skoduri-lc-nullinstance (Soumya Koduri, 2024-10-23, 5 files, -8/+30)
    rgw/lc: Fix issues with non-current objects with instance empty
    Reviewed-by: Casey Bodley <cbodley@redhat.com>
| * rgw/lc: Fix issues with non-current objects with instance empty (Soumya Koduri, 2024-10-22, 5 files, -8/+30)
    When bucket versioning is enabled, an old plain object entry is converted to a versioned one by updating its instance to "null" in its raw head/old object. However, its instance remains empty in the bi list entry. The same is true for entries created after versioning is suspended and re-enabled. So, to access such non-current objects, we need to set rgw_obj_key.instance 1) to "null" to read the actual raw obj, and 2) to empty while accessing/updating their bi entry.
    Fixes: https://tracker.ceph.com/issues/68402
    Signed-off-by: Soumya Koduri <skoduri@redhat.com>
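    For illustration only, a minimal standalone sketch of the two instance values described above (the struct below is a made-up stand-in, not the real rgw_obj_key):

        #include <iostream>
        #include <string>

        // Hypothetical stand-in for the key type; only the two fields that
        // matter for this fix are modeled here.
        struct ObjKey {
          std::string name;
          std::string instance;
        };

        int main() {
          // 1) to read the actual raw head/old object of the non-current entry
          ObjKey raw_head{"photo.jpg", "null"};
          // 2) to access/update its bucket-index (bi) entry
          ObjKey bi_entry{"photo.jpg", ""};

          std::cout << "raw head instance: '" << raw_head.instance << "'\n"
                    << "bi entry instance: '" << bi_entry.instance << "'\n";
        }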
* Merge pull request #60258 from aclamk/wip-aclamk-cbt-improve-show-label (Adam Kupczyk, 2024-10-23, 1 file, -5/+10)
    os/bluestore/ceph-bluestore-tool: Modify show-label for many devs
| * os/bluestore/ceph-bluestore-tool: Modify show-label for many devs (Adam Kupczyk, 2024-10-11, 1 file, -5/+10)
    It was possible to give multiple devices to ceph-bluestore-tool:
    > ceph-bluestore-tool show-label --dev /dev/sda --dev /dev/sdb
    But if any of the devices could not provide a valid label, nothing was printed. Now results are always printed; non-readable labels are output as empty dictionaries.
    Exit code:
    - 0 if any label was properly read
    - 1 if all labels failed
    Fixes: https://tracker.ceph.com/issues/68505
    Signed-off-by: Adam Kupczyk <akupczyk@ibm.com>
* Merge PR #60106 into main (Patrick Donnelly, 2024-10-22, 2 files, -4/+4)
    * refs/pull/60106/head:
      msg/async/ProtocolV2: pass `desc` as `std::string_view` to write()
    Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
    Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
| * msg/async/ProtocolV2: pass `desc` as `std::string_view` to write() (Max Kellermann, 2024-10-07, 2 files, -4/+4)
    All callers really pass a C string literal, and declaring a `std::string` parameter implicitly creates two `std::string` instances: one on the caller's stack, and another one inside write() as a parameter to the continuation lambda. This causes considerable and unnecessary overhead.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
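    A condensed sketch of the difference (function and caller are hypothetical, not the actual ProtocolV2 code):

        #include <cstdio>
        #include <string>
        #include <string_view>

        // before: every call with a string literal materializes a std::string
        void write_old(std::string desc) { std::printf("%s\n", desc.c_str()); }

        // after: a string_view carries only a pointer and a length, no allocation
        void write_new(std::string_view desc) {
          std::printf("%.*s\n", static_cast<int>(desc.size()), desc.data());
        }

        int main() {
          write_old("auth request");  // constructs (and later destroys) a std::string
          write_new("auth request");  // binds the literal directly
        }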
* Merge PR #60174 into main (Patrick Donnelly, 2024-10-22, 4 files, -84/+70)
    * refs/pull/60174/head:
      common/Finisher: pass name as std::string_view to ctor
      common/Finisher: add method get_thread_name()
      mgr/ActivePyModule: build thread name with fmt
      mgr/ActivePyModule: return std::string_view instead of std::string copy
      common/Finisher: use fmt to build strings
      common/Finisher: un-inline ctor and dtor
      common/Finisher: add `const` to several fields
      common/Finisher: merge duplicate field initializers
      common/Finisher: call notify_one() instead of notify_all()
      common/Finisher: wake up after pushing to the queue
      common/Finisher: do not wake up the thread if already running
      common/Finisher: call logger without holding the lock
      common/Finisher: use `std::lock_guard` instead of `std::unique_lock`
      common/Finisher: merge all queue() container methods into one template
    Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
| * common/Finisher: pass name as std::string_view to ctor (Max Kellermann, 2024-10-10, 2 files, -3/+3)
    This eliminates a temporary `std::string`. Additionally, convert the `tn` parameter to an rvalue reference and move it into `thread_name`.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
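    An illustrative sketch of the ctor signature change (a simplified stand-in, not the real Finisher class):

        #include <string>
        #include <string_view>
        #include <utility>

        class Worker {
          std::string thread_name;
        public:
          // name: only read, so string_view avoids a temporary std::string;
          // tn: taken by rvalue reference and moved into the member.
          Worker(std::string_view name, std::string&& tn)
            : thread_name(std::move(tn)) {
            (void)name;  // would be used for logging/perf counters in real code
          }
          const std::string& get_thread_name() const { return thread_name; }
        };

        int main() {
          Worker w("finisher-example", "fin_thread");  // temporary string is moved in
          return w.get_thread_name().empty();
        }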
| * common/Finisher: add method get_thread_name() (Max Kellermann, 2024-10-10, 2 files, -4/+6)
    This allows eliminating the copy in `ActivePyModule`.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * mgr/ActivePyModule: build thread name with fmt (Max Kellermann, 2024-10-10, 1 file, -1/+3)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * mgr/ActivePyModule: return std::string_view instead of std::string copy (Max Kellermann, 2024-10-10, 2 files, -4/+4)
    This implicit heap allocation is unnecessary.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: use fmt to build strings (Max Kellermann, 2024-10-10, 1 file, -2/+4)
    This is not only more efficient at runtime, but also shaves 500 bytes of code off the binary.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: un-inline ctor and dtor (Max Kellermann, 2024-10-10, 2 files, -25/+30)
    This aims to speed up compile times, because the constructor and destructor contain a lot of code that would be compiled in sources that do not call them. This also allows removing the "common/perf_counters.h" include. Since there is now only one instantiation of these for all call sites, the binary size shrinks by nearly 1 kB.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: add `const` to several fields (Max Kellermann, 2024-10-10, 1 file, -2/+2)
    These are never changed; `const` prevents accidental changes and allows further compiler optimizations.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: merge duplicate field initializers (Max Kellermann, 2024-10-10, 1 file, -8/+6)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: call notify_one() instead of notify_all() (Max Kellermann, 2024-10-10, 2 files, -2/+2)
    As noted in commit cc7ec3e18d1, there is only ever a single `Finisher` thread, so the overhead of `notify_all()` can be eliminated.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: wake up after pushing to the queue (Max Kellermann, 2024-10-10, 1 file, -3/+4)
    Pushing to the queue may take a long time when the `std::vector` needs to allocate more memory. We should wake up the `Finisher` thread only right before unlocking the `finisher_mutex`, to reduce lock contention: it is then more likely that the mutex can actually be acquired when the thread wakes up. This imitates how commit cc7ec3e18d191575c did it - that commit refactored only one of the `queue()` overloads, leaving less-than-optimal copies of this piece of code in all other overloads.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
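    A rough sketch of the queue/wake-up ordering described above (hypothetical names, not the actual Finisher implementation):

        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <utility>
        #include <vector>

        std::mutex finisher_mutex;
        std::condition_variable finisher_cond;
        std::vector<std::function<void()>> finisher_queue;

        void queue(std::function<void()> c) {
          std::lock_guard lock(finisher_mutex);
          // the push may reallocate the vector, so do it first ...
          finisher_queue.push_back(std::move(c));
          // ... and signal only as the last step, right before the lock_guard
          // releases the mutex (notify_one, because there is a single consumer)
          finisher_cond.notify_one();
        }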
| * common/Finisher: do not wake up the thread if already running (Max Kellermann, 2024-10-10, 1 file, -3/+3)
    If `finisher_running` is set, the `Finisher` thread will automatically pick up new items queued by other threads. There is therefore no need to wake it up, and we can eliminate one system call.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: call logger without holding the lock (Max Kellermann, 2024-10-10, 1 file, -7/+10)
    The `PerfCounters::inc()` method acquires another lock, which can block the calling thread while it still holds the `finisher_lock` and thus cause a lot of lock contention. This can easily be avoided by moving the `PerfCounters::inc()` call out of the protected code block.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
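    A small sketch of the pattern (hypothetical names; in the real code the counter update is a PerfCounters call with its own internal locking):

        #include <cstdio>
        #include <functional>
        #include <mutex>
        #include <vector>

        std::mutex finisher_lock;
        std::vector<std::function<void()>> finisher_queue;

        // stand-in for the PerfCounters update, which locks internally
        void log_queue_len(std::size_t n) { std::printf("queue_len=%zu\n", n); }

        void queue(std::function<void()> c) {
          std::size_t len;
          {
            std::lock_guard lock(finisher_lock);
            finisher_queue.push_back(std::move(c));
            len = finisher_queue.size();
          }
          // the counter update may block, so call it only after
          // finisher_lock has been released
          log_queue_len(len);
        }

        int main() { queue([] {}); }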
| * common/Finisher: use `std::lock_guard` instead of `std::unique_lock` (Max Kellermann, 2024-10-10, 2 files, -3/+3)
    `std::lock_guard` is all we need here; the added complexity of `std::unique_lock` is not used and is usually optimized away by the compiler. Using `std::lock_guard` directly reduces the amount of work that the optimizer needs to do and saves some build CPU cycles.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/Finisher: merge all queue() container methods into one template (Max Kellermann, 2024-10-10, 1 file, -31/+4)
    This merges some duplicate code. Only two overloads remain: one for single `Context` pointers and one for all containers. I tried to merge the former into the same template, but that led to a larger binary (+7 kB) because many pointer overloads were instantiated. This patch (with two overloads) increases the binary by only 8 bytes, which is acceptable.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
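    A sketch of the two-overload shape (generic, using std::function in place of the real Context* completions):

        #include <deque>
        #include <functional>
        #include <list>
        #include <utility>
        #include <vector>

        struct Queue {
          std::vector<std::function<void()>> pending;

          // overload 1: a single completion
          void queue(std::function<void()> c) { pending.push_back(std::move(c)); }

          // overload 2: one template replacing the old per-container copies
          template <typename Container>
          void queue(Container& ls) {
            for (auto& c : ls)
              pending.push_back(std::move(c));
            ls.clear();
          }
        };

        int main() {
          Queue q;
          std::list<std::function<void()>> ls{[] {}, [] {}};
          std::deque<std::function<void()>> dq{[] {}};
          q.queue([] {});   // single-item overload
          q.queue(ls);      // template overload handles any container
          q.queue(dq);
          return q.pending.size() == 4 ? 0 : 1;
        }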
* Merge PR #60214 into main (Patrick Donnelly, 2024-10-22, 3 files, -136/+122)
    * refs/pull/60214/head:
      mds/MDCache: use `auto`
      mds/CDir: use the erase() return value
      mds/MDCache: remove unnecessary empty() check
      mds/MDCache: use the erase() return value
      mds/MDCache: pass iterator by value
    Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
| * mds/MDCache: use `auto` (Max Kellermann, 2024-10-09, 1 file, -99/+97)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * mds/CDir: use the erase() return value (Max Kellermann, 2024-10-09, 1 file, -5/+2)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * mds/MDCache: remove unnecessary empty() check (Max Kellermann, 2024-10-09, 1 file, -7/+5)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * mds/MDCache: use the erase() return value (Max Kellermann, 2024-10-09, 2 files, -26/+19)
    When erasing items from a linked list while iterating over it, it is good practice (and safer, and sometimes faster) to use the erase() return value instead of incrementing the iterator.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
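    The pattern in isolation, shown on a std::list of integers:

        #include <cassert>
        #include <list>

        int main() {
          std::list<int> items{1, 2, 3, 4, 5};
          for (auto it = items.begin(); it != items.end(); ) {
            if (*it % 2 == 0)
              it = items.erase(it);  // erase() returns the next valid iterator
            else
              ++it;                  // only advance when nothing was erased
          }
          assert(items.size() == 3);
          return 0;
        }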
| * mds/MDCache: pass iterator by value (Max Kellermann, 2024-10-09, 2 files, -2/+2)
    An iterator is just a pointer, and passing it by reference means we pass a pointer to a pointer, which is useless overhead.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
* Merge PR #60216 into main (Patrick Donnelly, 2024-10-22, 11 files, -21/+23)
    * refs/pull/60216/head:
      common/options: pass name as rvalue reference
      common/config: use libfmt to build strings
      common/config: use emplace_back() instead of push_back()
      common/HeartbeatMap: pass name as rvalue reference
      common/config_obs_mgr: use the erase() return value
      common/SloppyCRCMap: use the erase() return value
      common: disable `boost::intrusive::constant_time_size`
    Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
| * common/options: pass name as rvalue reference (Max Kellermann, 2024-10-09, 3 files, -4/+4)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/config: use libfmt to build strings (Max Kellermann, 2024-10-09, 1 file, -9/+8)
    The machine code is both smaller and faster.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/config: use emplace_back() instead of push_back() (Max Kellermann, 2024-10-09, 1 file, -2/+2)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/HeartbeatMap: pass name as rvalue reference (Max Kellermann, 2024-10-09, 2 files, -5/+5)
    This eliminates one temporary copy per call.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/config_obs_mgr: use the erase() return value (Max Kellermann, 2024-10-09, 1 file, -1/+1)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common/SloppyCRCMap: use the erase() return value (Max Kellermann, 2024-10-09, 1 file, -1/+1)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * common: disable `boost::intrusive::constant_time_size` (Max Kellermann, 2024-10-09, 3 files, -0/+3)
    By default, the Boost intrusive containers enable the `constant_time_size` option, which adds overhead to each modification in order to track the size in a field. We don't need that.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
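    A generic example of the option being turned off (the element type here is made up, not one of the Ceph types touched by this commit):

        #include <boost/intrusive/list.hpp>

        struct Item : boost::intrusive::list_base_hook<> {
          int value = 0;
        };

        // constant_time_size<false>: no size field is maintained, so every
        // push/erase skips that bookkeeping; size() becomes O(n), empty() stays O(1)
        using ItemList =
            boost::intrusive::list<Item, boost::intrusive::constant_time_size<false>>;

        int main() {
          Item a, b;
          ItemList items;
          items.push_back(a);
          items.push_back(b);
          bool ok = !items.empty();
          items.clear();  // unlink before the Items go out of scope
          return ok ? 0 : 1;
        }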
* Merge PR #60220 into main (Patrick Donnelly, 2024-10-22, 9 files, -77/+59)
    * refs/pull/60220/head:
      msg/async/AsyncConnection: move the writeCallback instead of copying it
      msg/async/AsyncConnection: do not wrap writeCallback in `std::optional`
      msg/async/frames_v2: use zero-initialization instead of memset()
      msg/async/Event: use zero-initialization instead of memset()
      msg/Message: use zero-initialization instead of memset()
      msg/async/ProtocolV2: eliminate redundant std::map lookups
      msg/async/ProtocolV[12]: reverse the std::map sort order
      msg/async/ProtocolV[12]: use `auto`
      msg/async/ProtocolV[12]: use range-based `for`
      msg/async/ProtocolV1: use zero-initialization instead of memset()
    Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
| * msg/async/AsyncConnection: move the writeCallback instead of copying it (Max Kellermann, 2024-10-09, 1 file, -1/+1)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * msg/async/AsyncConnection: do not wrap writeCallback in `std::optional` (Max Kellermann, 2024-10-09, 4 files, -6/+5)
    Since `std::function` is nullable and has an `operator bool()`, we can easily eliminate the `std::optional` overhead.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
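    A reduced sketch of the idea (the member name follows the commit title; the surrounding class is invented):

        #include <cstdio>
        #include <functional>
        #include <utility>

        struct Connection {
          // before: std::optional<std::function<void(int)>> writeCallback;
          std::function<void(int)> writeCallback;  // default-constructed == empty

          void on_write_done(int r) {
            if (writeCallback) {                   // operator bool() replaces has_value()
              auto cb = std::move(writeCallback);
              writeCallback = nullptr;             // explicit "empty" instead of reset()
              cb(r);
            }
          }
        };

        int main() {
          Connection c;
          c.writeCallback = [](int r) { std::printf("write finished: r=%d\n", r); };
          c.on_write_done(0);
        }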
| * msg/async/frames_v2: use zero-initialization instead of memset() (Max Kellermann, 2024-10-09, 1 file, -15/+5)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
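    The general before/after shape of the memset() replacements in this series, on a made-up struct:

        #include <cstdint>
        #include <cstring>

        struct Header {
          uint8_t tag;
          uint16_t flags;
          uint32_t length;
        };

        int main() {
          // before: declare, then clear every byte explicitly
          Header a;
          std::memset(&a, 0, sizeof(a));

          // after: value-initialization zeroes every member, no memset() needed
          Header b{};

          return (a.tag == b.tag && a.flags == b.flags && a.length == b.length) ? 0 : 1;
        }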
| * msg/async/Event: use zero-initialization instead of memset() (Max Kellermann, 2024-10-09, 1 file, -5/+1)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * msg/Message: use zero-initialization instead of memset() (Max Kellermann, 2024-10-09, 1 file, -8/+3)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * msg/async/ProtocolV2: eliminate redundant std::map lookups (Max Kellermann, 2024-10-09, 2 files, -10/+10)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * msg/async/ProtocolV[12]: reverse the std::map sort order (Max Kellermann, 2024-10-09, 4 files, -6/+16)
    This allows eliminating one lookup in `_get_next_outgoing()` because we can pass the iterator instead of the key to `erase()`.
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
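    A generic illustration of the reversed-comparator trick (illustrative types, not the real outgoing queue):

        #include <functional>
        #include <iostream>
        #include <list>
        #include <map>
        #include <string>

        int main() {
          // keyed by priority; std::greater<> puts the highest priority at begin()
          std::map<int, std::list<std::string>, std::greater<>> out_queue;
          out_queue[1].push_back("low-priority message");
          out_queue[7].push_back("high-priority message");

          auto it = out_queue.begin();  // highest priority, no rbegin() needed
          std::cout << "sending: " << it->second.front() << '\n';
          it->second.pop_front();
          if (it->second.empty())
            out_queue.erase(it);        // erase by iterator: no second lookup by key
          return 0;
        }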
| * msg/async/ProtocolV[12]: use `auto` (Max Kellermann, 2024-10-09, 1 file, -6/+5)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * msg/async/ProtocolV[12]: use range-based `for` (Max Kellermann, 2024-10-09, 2 files, -13/+11)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
| * msg/async/ProtocolV1: use zero-initialization instead of memset() (Max Kellermann, 2024-10-09, 1 file, -11/+6)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
* Merge PR #60324 into main (Patrick Donnelly, 2024-10-22, 1 file, -0/+2)
    * refs/pull/60324/head:
      mds/Beacon: set a thread name
    Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
| * mds/Beacon: set a thread name (Max Kellermann, 2024-10-15, 1 file, -0/+2)
    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
* Merge pull request #60225 from MaxKellermann/ceph_context_atomic (Casey Bodley, 2024-10-21, 1 file, -0/+15)
    common/ceph_context: use std::atomic<std::shared_ptr<T>>
    Reviewed-by: Casey Bodley <cbodley@redhat.com>
| * common/ceph_context: use std::atomic<std::shared_ptr<T>> (Max Kellermann, 2024-10-11, 1 file, -0/+15)
    Fixes the compiler warning:

      src/common/ceph_context.h: In member function ‘std::shared_ptr<std::vector<entity_addrvec_t> > ceph::common::CephContext::get_mon_addrs() const’:
      src/common/ceph_context.h:288:36: warning: ‘std::shared_ptr<_Tp> std::atomic_load_explicit(const shared_ptr<_Tp>*, memory_order) [with _Tp = vector<entity_addrvec_t>]’ is deprecated: use 'std::atomic<std::shared_ptr<T>>' instead [-Wdeprecated-declarations]
        288 |   auto ptr = atomic_load_explicit(&_mon_addrs, std::memory_order_relaxed);
            |              ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      /usr/include/c++/14/bits/shared_ptr_atomic.h:133:5: note: declared here
        133 |   atomic_load_explicit(const shared_ptr<_Tp>* __p, memory_order)
            |   ^~~~~~~~~~~~~~~~~~~~

    The modernized version does not build with GCC 11, so this patch contains both versions for now, switched by a `__GNUC__` preprocessor check.

    Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
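    A condensed stand-alone illustration of the dual-path approach (the field name, element type, and exact compiler-version cutoff below are assumptions, not the real CephContext code):

        #include <atomic>
        #include <memory>
        #include <vector>

        struct MonAddrHolder {
        #if defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 12
          // older toolchains: keep the (deprecated) free-function interface
          std::shared_ptr<std::vector<int>> addrs;
          std::shared_ptr<std::vector<int>> get() const {
            return std::atomic_load_explicit(&addrs, std::memory_order_relaxed);
          }
        #else
          // C++20 path: std::atomic<std::shared_ptr<T>>
          std::atomic<std::shared_ptr<std::vector<int>>> addrs;
          std::shared_ptr<std::vector<int>> get() const {
            return addrs.load(std::memory_order_relaxed);
          }
        #endif
        };

        int main() {
          MonAddrHolder h;
          return h.get() == nullptr ? 0 : 1;
        }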