author     Junio C Hamano <gitster@pobox.com>  2018-09-17 22:53:53 +0200
committer  Junio C Hamano <gitster@pobox.com>  2018-09-17 22:53:53 +0200
commit     7e794d0a3f7ad4a37541539b823d5b9afdc10ce3 (patch)
tree       e0ee853dbcf38a57195490ffc547039e4348de5f /preload-index.c
parent     Merge branch 'ds/reachable' (diff)
parent     Document update for nd/unpack-trees-with-cache-tree (diff)
download   git-7e794d0a3f7ad4a37541539b823d5b9afdc10ce3.tar.xz
           git-7e794d0a3f7ad4a37541539b823d5b9afdc10ce3.zip
Merge branch 'nd/unpack-trees-with-cache-tree'
The unpack_trees() API used in checking out a branch and merging
walks one or more trees along with the index. When the cache-tree
in the index tells us that we are walking a tree whose flattened
contents are already known (i.e. they match a contiguous span of
the index), the walk can be optimized: linearly scanning that span
of the index is much cheaper than recursively opening tree objects
and listing their entries. This topic implements that optimization.
* nd/unpack-trees-with-cache-tree:
Document update for nd/unpack-trees-with-cache-tree
cache-tree: verify valid cache-tree in the test suite
unpack-trees: add missing cache invalidation
unpack-trees: reuse (still valid) cache-tree from src_index
unpack-trees: reduce malloc in cache-tree walk
unpack-trees: optimize walking same trees with cache-tree
unpack-trees: add performance tracing
trace.h: support nested performance tracing
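
The core idea can be illustrated with a simplified, self-contained sketch.
This is not Git's actual data structures or API; `span_entry`, `cache_span`,
and `walk_span_linearly` are hypothetical names chosen for illustration. The
point is only that, when the cache-tree marks a subtree as valid and covering
N consecutive index entries, the walker can consume those entries with a flat
loop instead of opening the tree object and recursing:

#include <stdio.h>

/* Hypothetical, simplified stand-ins for Git's index entries and cache-tree. */
struct span_entry {
	const char *path;   /* full path as stored in the index */
};

struct cache_span {
	int valid;          /* cache-tree says this subtree matches the index */
	int start;          /* first index entry covered by the subtree */
	int count;          /* number of consecutive index entries covered */
};

/* Fast path: the subtree's flattened contents equal a span of the index,
 * so we scan that span linearly instead of reading tree objects. */
static void walk_span_linearly(struct span_entry *index, struct cache_span *span)
{
	for (int i = span->start; i < span->start + span->count; i++)
		printf("visit %s\n", index[i].path);
}

int main(void)
{
	struct span_entry index[] = {
		{ "dir/a.c" }, { "dir/b.c" }, { "dir/sub/c.c" },
	};
	struct cache_span dir_span = { 1, 0, 3 }; /* "dir/" covers entries 0..2 */

	if (dir_span.valid)
		walk_span_linearly(index, &dir_span); /* O(count) scan, no tree reads */
	/* else: fall back to the usual recursive tree walk */
	return 0;
}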
Diffstat (limited to 'preload-index.c')
-rw-r--r--  preload-index.c  4
1 file changed, 2 insertions, 2 deletions
diff --git a/preload-index.c b/preload-index.c
index 71cd2437a3..f7365761f4 100644
--- a/preload-index.c
+++ b/preload-index.c
@@ -78,7 +78,6 @@ static void preload_index(struct index_state *index,
 {
 	int threads, i, work, offset;
 	struct thread_data data[MAX_PARALLEL];
-	uint64_t start = getnanotime();
 
 	if (!core_preload_index)
 		return;
@@ -88,6 +87,7 @@ static void preload_index(struct index_state *index,
 		threads = 2;
 	if (threads < 2)
 		return;
+	trace_performance_enter();
 	if (threads > MAX_PARALLEL)
 		threads = MAX_PARALLEL;
 	offset = 0;
@@ -109,7 +109,7 @@ static void preload_index(struct index_state *index,
 		if (pthread_join(p->pthread, NULL))
 			die("unable to join threaded lstat");
 	}
-	trace_performance_since(start, "preload index");
+	trace_performance_leave("preload index");
 }
 #endif
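
The diff replaces the one-shot getnanotime()/trace_performance_since()
measurement in preload_index() with a trace_performance_enter()/
trace_performance_leave() pair, which lets timed regions nest (see
"trace.h: support nested performance tracing" in the topic list above).
A minimal, self-contained sketch of that enter/leave pattern follows;
this is not Git's trace.h, and the `perf_enter`/`perf_leave` helpers and
fixed-depth stack are assumptions made purely for illustration:

#include <stdio.h>
#include <time.h>

/* Toy nested performance tracer, modeled loosely on the enter/leave
 * pattern used in the diff above; not Git's actual trace.h API. */
static struct timespec perf_stack[16];
static int perf_depth;

static void perf_enter(void)
{
	clock_gettime(CLOCK_MONOTONIC, &perf_stack[perf_depth++]);
}

static void perf_leave(const char *label)
{
	struct timespec now;
	double ms;

	clock_gettime(CLOCK_MONOTONIC, &now);
	perf_depth--;
	ms = (now.tv_sec - perf_stack[perf_depth].tv_sec) * 1e3 +
	     (now.tv_nsec - perf_stack[perf_depth].tv_nsec) / 1e6;
	/* Indent by nesting depth so nested regions read like a call tree. */
	printf("%*sperformance: %.3f ms: %s\n", perf_depth * 2, "", ms, label);
}

int main(void)
{
	perf_enter();          /* outer region, e.g. a whole status run */
	perf_enter();          /* nested region, e.g. "preload index" */
	perf_leave("preload index");
	perf_leave("read directory");
	return 0;
}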