author | luo rixin <luorixin@huawei.com> | 2020-12-23 10:02:51 +0100 |
---|---|---|
committer | luo rixin <luorixin@huawei.com> | 2020-12-23 10:38:09 +0100 |
commit | 9089aad33a8738ed6c0022d4053f193c68209f2b | |
tree | 05cf0e6c8ffcafc6b2b12820f0f5fcc8eee04806 | |
parent | Merge PR #37721 into master | |
common/buffer: use small page alignment when bufferlist preallocates memory
On aarch64 with the page size set to 64K, memory is over-used and
the cache is constantly over-evicted. This is caused by BlueStore's
ExtentMap, which needs to reserve 256 B (the config option is
bluestore_extent_map_inline_shard_prealloc_size). When bufferlist
preallocates a small amount of memory from tcmalloc in a page-aligned
way, tcmalloc allocates a whole page and cannot reuse the unused
memory in that page. The more bluestore_onode items there are in the
meta cache, the more memory is wasted in an OSD daemon.
Signed-off-by: luo rixin <luorixin@huawei.com>
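The waste the commit message describes is simple arithmetic. A minimal sketch, assuming 64 KiB pages, the default 256 B bluestore_extent_map_inline_shard_prealloc_size, and a hypothetical population of 100,000 cached onodes (the onode count is illustrative, not from the commit):

```cpp
#include <cstddef>
#include <cstdio>

int main()
{
  // Assumptions: 64 KiB kernel pages (aarch64 built with 64K pages) and the
  // default bluestore_extent_map_inline_shard_prealloc_size of 256 B.
  const std::size_t page_size = 64 * 1024;
  const std::size_t prealloc  = 256;

  // Per the commit message, a page-aligned 256 B allocation makes tcmalloc
  // hand out a whole page whose remainder it cannot reuse, so nearly the
  // entire page is stranded for every preallocation.
  const std::size_t wasted_per_alloc = page_size - prealloc;  // 65280 B

  // Hypothetical meta-cache population, one preallocation per onode.
  const std::size_t onodes = 100000;
  std::printf("wasted per prealloc: %zu B\n", wasted_per_alloc);
  std::printf("wasted for %zu onodes: ~%zu MiB\n",
              onodes, onodes * wasted_per_alloc / (1024 * 1024));
  return 0;
}
```

At 64K pages that is roughly 6 GiB of stranded memory for 100,000 onodes, which matches the "over used" behavior the message reports.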
-rw-r--r-- | src/common/buffer.cc | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/src/common/buffer.cc b/src/common/buffer.cc
index d9b32163dba..8d7583e7adb 100644
--- a/src/common/buffer.cc
+++ b/src/common/buffer.cc
@@ -1269,7 +1269,7 @@ static ceph::spinlock debug_lock;
 void buffer::list::reserve(size_t prealloc)
 {
   if (get_append_buffer_unused_tail_length() < prealloc) {
-    auto ptr = ptr_node::create(buffer::create_page_aligned(prealloc));
+    auto ptr = ptr_node::create(buffer::create_small_page_aligned(prealloc));
     ptr->set_length(0);   // unused, so far.
     _carriage = ptr.get();
     _buffers.push_back(*ptr.release());
```
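The one-line fix swaps create_page_aligned for create_small_page_aligned in buffer::list::reserve. A sketch of the distinction it relies on; the sub-page threshold and the CEPH_BUFFER_ALLOC_UNIT fallback are assumptions about the Ceph buffer API of that era, not a verbatim copy of the implementation:

```cpp
#include "include/buffer.h"  // ceph::buffer API, when built inside the Ceph tree

// Assumed behavior: create_page_aligned() always aligns to CEPH_PAGE_SIZE
// (64K on the aarch64 setup above), while create_small_page_aligned() is
// expected to fall back to a much smaller alignment unit for sub-page
// requests, so a 256 B prealloc no longer strands most of a 64K page.
// CEPH_PAGE_SIZE and CEPH_BUFFER_ALLOC_UNIT come from the Ceph headers.
static ceph::buffer::ptr small_page_aligned_sketch(unsigned len)
{
  if (len < CEPH_PAGE_SIZE)
    return ceph::buffer::create_aligned(len, CEPH_BUFFER_ALLOC_UNIT); // small unit
  return ceph::buffer::create_aligned(len, CEPH_PAGE_SIZE);           // whole page
}
```

Under that assumption, the 256 B ExtentMap preallocation costs on the order of its own size rather than a full 64K page, while page-sized and larger buffers keep their page alignment.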