path: root/object.c
Commit message  Author  Files  Lines
2025-01-01  l10n: tr: Update Turkish translations for 2.48  Emir SARI  1  -68/+225
Signed-off-by: Emir SARI <emir_sari@icloud.com>
2024-12-30  Git 2.48-rc1 (v2.48.0-rc1)  Junio C Hamano  1  -0/+3
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  parse-options: localize mark-up of placeholder text in the short help  Alexander Shopov  1  -3/+40
i18n: expose substitution hint chars in functions and macros to translators For example (based on builtin/commit.c and shortened): the "--author" option takes a name. In source this can be represented as: OPT_STRING(0, "author", &force_author, N_("author"), N_("override author")), When the command is run with the "-h" (short help) option (git commit -h), the above definition is displayed as: --[no-]author <author> override author Git does not use translated option names, so the first part of the above, "--[no-]author", is given as-is (it is based on the 2nd argument of OPT_STRING). However the string "author" in the pair of "<>", and the explanation "override author for commit", may be translated into the user's language. The user's language may use a convention to mark a replaceable part of the command line (called a "placeholder string") differently from enclosing it inside a pair of "<>", but the implementation in parse-options.c hardcodes "<%s>". Allow translators to specify the presentation of a placeholder string for their languages by overriding the "<%s>". In case the translator's writing system is sufficiently different from Latin, the "<>" characters can be substituted by an empty string, thus effectively skipping them in the output. For example, languages with uppercase versions of characters can use that to delineate replaceability. Alternatively a translator can decide to use characters that are visually close to "<>" but are not interpreted by the shell. Signed-off-by: Alexander Shopov <ash@kambanaria.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
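To illustrate the mechanism, a minimal C sketch, not the actual parse-options.c code (the helper name and the gettext setup here are assumptions): because the "<%s>" template itself goes through the translation catalog, a language can reshape or drop the angle brackets.

    #include <stdio.h>
    #include <libintl.h>
    #define _(msgid) gettext(msgid)

    /*
     * Hypothetical helper printing "--[no-]author <author>"-style short help.
     * Both the placeholder ("author") and the "<%s>" mark-up are looked up in
     * the catalog, so a translator may render the latter as "%s", "[%s]", or
     * whatever suits their writing system.
     */
    static void show_short_help(const char *long_name, const char *argh)
    {
        printf("--[no-]%s ", long_name);
        printf(_("<%s>"), _(argh));
        putchar('\n');
    }

With an untranslated catalog, show_short_help("author", "author") would print the familiar "--[no-]author <author>" form.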
2024-12-30  meson: provide a summary of configured backends  Patrick Steinhardt  1  -0/+7
There are a couple of backends from which the user can choose for HTTPS, SHA1, its unsafe variant as well as SHA256. Provide a summary of the configured values to make these more discoverable. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  meson: wire up unsafe SHA1 backend  Patrick Steinhardt  2  -10/+32
In 06c92dafb8 (Makefile: allow specifying a SHA-1 for non-cryptographic uses, 2024-09-26), we have introduced a cryptographically-insecure backend for SHA1 that can optionally be used in some contexts where the processed data is not security relevant. That work was in flight at the same time as the effort to introduce Meson, so Meson does not yet have an equivalent. Wire up a new build option that lets users pick an unsafe SHA1 backend. Note that for simplicity's sake we have to drop the error condition around an unhandled SHA1 backend. This should be fine though given that Meson verifies the value for combo-options for us. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  meson: add missing dots for build options  Patrick Steinhardt  1  -2/+2
Most of our Meson build options end with a trailing dot, but those for our SHA1 and SHA256 backends don't. Add it. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  meson: simplify conditions for HTTPS and SHA1 dependencies  Patrick Steinhardt  1  -2/+2
The conditions used to figure out whether the Security framework or OpenSSL library is required are a bit convoluted because they can be pulled in via the HTTPS, SHA1 or SHA256 backends. Refactor them to be easier to read. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  meson: require SecurityFramework when it's used as SHA1 backend  Patrick Steinhardt  1  -1/+1
The Security framework is required when we use CommonCrypto either as HTTPS or SHA1 backend, but we only require it in case it is set up as HTTPS backend. Fix this. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  meson: deduplicate access to SHA1/SHA256 backend options  Patrick Steinhardt  1  -3/+3
We've got a couple of repeated calls to `get_option()` for the SHA1 and SHA256 backend options. While not an issue, it makes the code needlessly verbose. Fix this by consistently using a local variable. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  meson: consistently spell 'CommonCrypto'  Patrick Steinhardt  2  -2/+2
The 'CommonCrypto' backend can be specified as HTTPS and SHA1 backends, but the value that one needs to use is inconsistent across those two build options. Unify it to 'CommonCrypto'. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  ci: exercise unsafe OpenSSL backend  Patrick Steinhardt  1  -0/+1
In the preceding commit we have fixed a segfault when using an unsafe SHA1 backend that is different from the safe one. This segfault only went unnoticed because we never set up an unsafe backend in our CI systems. Fix this omission by setting `OPENSSL_SHA1_UNSAFE` in our TEST-vars job. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  builtin/fast-import: fix segfault with unsafe SHA1 backend  Patrick Steinhardt  1  -1/+1
Same as with the preceding commit, git-fast-import(1) is using the safe variant to initialize a hashfile checkpoint. This leads to a segfault when passing the checkpoint into the hashfile subsystem because it would use the unsafe variants instead: ++ git --git-dir=R/.git fast-import --big-file-threshold=1 AddressSanitizer:DEADLYSIGNAL ================================================================= ==577126==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000040 (pc 0x7ffff7a01a99 bp 0x5070000009c0 sp 0x7fffffff5b30 T0) ==577126==The signal is caused by a READ memory access. ==577126==Hint: address points to the zero page. #0 0x7ffff7a01a99 in EVP_MD_CTX_copy_ex (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4) #1 0x555555ddde56 in openssl_SHA1_Clone ../sha1/openssl.h:40:2 #2 0x555555dce2fc in git_hash_sha1_clone_unsafe ../object-file.c:123:2 #3 0x555555c2d5f8 in hashfile_checkpoint ../csum-file.c:211:2 #4 0x5555559647d1 in stream_blob ../builtin/fast-import.c:1110:2 #5 0x55555596247b in parse_and_store_blob ../builtin/fast-import.c:2031:3 #6 0x555555967f91 in file_change_m ../builtin/fast-import.c:2408:5 #7 0x55555595d8a2 in parse_new_commit ../builtin/fast-import.c:2768:4 #8 0x55555595bb7a in cmd_fast_import ../builtin/fast-import.c:3614:4 #9 0x555555b1f493 in run_builtin ../git.c:480:11 #10 0x555555b1bfef in handle_builtin ../git.c:740:9 #11 0x555555b1e6f4 in run_argv ../git.c:807:4 #12 0x555555b1b87a in cmd_main ../git.c:947:19 #13 0x5555561649e6 in main ../common-main.c:64:11 #14 0x7ffff742a1fb in __libc_start_call_main (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a1fb) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4) #15 0x7ffff742a2b8 in __libc_start_main@GLIBC_2.2.5 (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a2b8) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4) #16 0x555555772c84 in _start (git+0x21ec84) ==577126==Register values: rax = 0x0000511000000cc0 rbx = 0x0000000000000000 rcx = 0x000000000000000c rdx = 0x0000000000000000 rdi = 0x0000000000000000 rsi = 0x00005070000009c0 rbp = 0x00005070000009c0 rsp = 0x00007fffffff5b30 r8 = 0x0000000000000000 r9 = 0x0000000000000000 r10 = 0x0000000000000000 r11 = 0x00007ffff7a01a30 r12 = 0x0000000000000000 r13 = 0x00007fffffff6b60 r14 = 0x00007ffff7ffd000 r15 = 0x00005555563b9910 AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4) in EVP_MD_CTX_copy_ex ==577126==ABORTING ./test-lib.sh: line 1039: 577126 Aborted git --git-dir=R/.git fast-import --big-file-threshold=1 < input error: last command exited with $?=134 not ok 167 - R: blob bigger than threshold The segfault is only exposed in case the unsafe and safe backends are different from one another. Fix the issue by initializing the context with the unsafe SHA1 variant. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  bulk-checkin: fix segfault with unsafe SHA1 backend  Patrick Steinhardt  1  -1/+1
In 1b9e9be8b4 (csum-file.c: use unsafe SHA-1 implementation when available, 2024-09-26) we have converted our `struct hashfile` to use the unsafe SHA1 backend, which results in a significant speedup. One needs to be careful with how to use that structure now though because callers need to consistently use either the safe or unsafe variants of SHA1, as otherwise one can easily trigger corruption. As it turns out, we have one inconsistent usage in our tree because we directly initialize `struct hashfile_checkpoint::ctx` with the safe variant of SHA1, but end up writing to that context with the unsafe ones. This went unnoticed so far because our CI systems do not exercise different hash functions for these two backends, and consequently safe and unsafe variants are equivalent. But when using SHA1DC as safe and OpenSSL as unsafe backend this leads to a crash in t1050: ++ git -c core.compression=0 add large1 AddressSanitizer:DEADLYSIGNAL ================================================================= ==1367==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000040 (pc 0x7ffff7a01a99 bp 0x507000000db0 sp 0x7fffffff5690 T0) ==1367==The signal is caused by a READ memory access. ==1367==Hint: address points to the zero page. #0 0x7ffff7a01a99 in EVP_MD_CTX_copy_ex (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4) #1 0x555555ddde56 in openssl_SHA1_Clone ../sha1/openssl.h:40:2 #2 0x555555dce2fc in git_hash_sha1_clone_unsafe ../object-file.c:123:2 #3 0x555555c2d5f8 in hashfile_checkpoint ../csum-file.c:211:2 #4 0x555555b9905d in deflate_blob_to_pack ../bulk-checkin.c:286:4 #5 0x555555b98ae9 in index_blob_bulk_checkin ../bulk-checkin.c:362:15 #6 0x555555ddab62 in index_blob_stream ../object-file.c:2756:9 #7 0x555555dda420 in index_fd ../object-file.c:2778:9 #8 0x555555ddad76 in index_path ../object-file.c:2796:7 #9 0x555555e947f3 in add_to_index ../read-cache.c:771:7 #10 0x555555e954a4 in add_file_to_index ../read-cache.c:804:9 #11 0x5555558b5c39 in add_files ../builtin/add.c:355:7 #12 0x5555558b412e in cmd_add ../builtin/add.c:578:18 #13 0x555555b1f493 in run_builtin ../git.c:480:11 #14 0x555555b1bfef in handle_builtin ../git.c:740:9 #15 0x555555b1e6f4 in run_argv ../git.c:807:4 #16 0x555555b1b87a in cmd_main ../git.c:947:19 #17 0x5555561649e6 in main ../common-main.c:64:11 #18 0x7ffff742a1fb in __libc_start_call_main (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a1fb) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4) #19 0x7ffff742a2b8 in __libc_start_main@GLIBC_2.2.5 (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a2b8) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4) #20 0x555555772c84 in _start (git+0x21ec84) ==1367==Register values: rax = 0x0000511000001080 rbx = 0x0000000000000000 rcx = 0x000000000000000c rdx = 0x0000000000000000 rdi = 0x0000000000000000 rsi = 0x0000507000000db0 rbp = 0x0000507000000db0 rsp = 0x00007fffffff5690 r8 = 0x0000000000000000 r9 = 0x0000000000000000 r10 = 0x0000000000000000 r11 = 0x00007ffff7a01a30 r12 = 0x0000000000000000 r13 = 0x00007fffffff6b38 r14 = 0x00007ffff7ffd000 r15 = 0x00005555563b9910 AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4) in EVP_MD_CTX_copy_ex ==1367==ABORTING ./test-lib.sh: line 1023: 1367 Aborted git $config add large1 error: last command exited with $?=134 not ok 4 - add with -c core.compression=0 Fix the issue by using the unsafe variant instead. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
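A rough C illustration of the mismatch described in the two segfault fixes above, with simplified, made-up struct and field names rather than git's real hash API: the checkpoint context has to be initialized by the same (unsafe) backend whose clone/update routines the hashfile code calls later.

    /* Simplified sketch; not git's actual csum-file or hash structures. */
    struct hash_ctx { void *state; };

    struct hash_algo {
        void (*init_fn)(struct hash_ctx *);           /* safe backend, e.g. SHA1DC    */
        void (*unsafe_init_fn)(struct hash_ctx *);    /* unsafe backend, e.g. OpenSSL */
        void (*unsafe_clone_fn)(struct hash_ctx *dst, const struct hash_ctx *src);
    };

    static void checkpoint_buggy(const struct hash_algo *algo, struct hash_ctx *ctx)
    {
        algo->init_fn(ctx);        /* BUG: sets up safe-backend state, which the     */
                                   /* later unsafe_clone_fn() call misinterprets as  */
                                   /* unsafe-backend state and dereferences garbage  */
    }

    static void checkpoint_fixed(const struct hash_algo *algo, struct hash_ctx *ctx)
    {
        algo->unsafe_init_fn(ctx); /* matches the unsafe clone/update/final calls */
    }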
2024-12-30  object-file: fix race in object collision check  Patrick Steinhardt  1  -2/+4
One of the tests in t5616 asserts that git-fetch(1) with `--refetch` triggers repository maintenance with the correct set of arguments. This test is flaky and causes us to fail sometimes: ++ git -c protocol.version=0 -c gc.autoPackLimit=0 -c maintenance.incremental-repack.auto=1234 -C pc1 fetch --refetch origin error: unable to open .git/objects/pack/pack-029d08823bd8a8eab510ad6ac75c823cfd3ed31e.pack: No such file or directory fatal: unable to rename temporary file to '.git/objects/pack/pack-029d08823bd8a8eab510ad6ac75c823cfd3ed31e.pack' fatal: could not finish pack-objects to repack local links fatal: index-pack failed error: last command exited with $?=128 The error message is quite confusing as it talks about trying to rename a temporary packfile. A first hunch would thus be that this packfile gets written by git-fetch(1), but removed by git-maintenance(1) while it hasn't yet been finalized, which shouldn't ever happen. And indeed, when looking closer one notices that the file that is supposedly of temporary nature does not have the typical `tmp_pack_` prefix. As it turns out, the "unable to rename temporary file" fatal error is a red herring and the real error is "unable to open". That error is raised by `check_collision()`, which is called by `finalize_object_file()` when moving the new packfile into place. Because t5616 re-fetches objects, we end up with the exact same pack as we already have in the repository. So when the concurrent git-maintenance(1) process rewrites the preexisting pack and unlinks it exactly at the point in time where git-fetch(1) wants to check the old and new packfiles for equality we will see ENOENT and thus `check_collision()` returns an error, which gets bubbled up by `finalize_object_file()` and is then handled by `rename_tmp_packfile()`. That function does not know about the exact root cause of the error and instead just claims that the rename has failed. This race is thus caused by b1b8dfde69 (finalize_object_file(): implement collision check, 2024-09-26), where we have newly introduced the collision check. By definition, two files cannot collide with each other when one of them has been removed. We can thus trivially fix the issue by ignoring ENOENT when opening either of the files we're about to check for collision. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
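A sketch of the fix in a check_collision()-style helper (names and structure simplified; the real code lives in object-file.c): ENOENT on either file is treated as "nothing to compare against" rather than as an error.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch only: compare two supposedly identical files, tolerating
     * the case where a concurrent process has unlinked one of them. */
    static int check_collision_sketch(const char *source, const char *dest)
    {
        int fd_source = -1, fd_dest = -1, ret = 0;

        fd_source = open(source, O_RDONLY);
        if (fd_source < 0) {
            ret = (errno == ENOENT) ? 0 : -1;  /* removed file: no collision */
            goto out;
        }
        fd_dest = open(dest, O_RDONLY);
        if (fd_dest < 0) {
            ret = (errno == ENOENT) ? 0 : -1;
            goto out;
        }
        /* ... byte-by-byte comparison of both files would go here ... */
    out:
        if (fd_source >= 0)
            close(fd_source);
        if (fd_dest >= 0)
            close(fd_dest);
        return ret;
    }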
2024-12-30  grep: work around LSan threading race with barrier  Jeff King  1  -0/+8
There's a race with LSan when spawning threads and one of the threads calls die(). We worked around one such problem with index-pack in the previous commit, but it exists in git-grep, too. You can see it with: make SANITIZE=leak THREAD_BARRIER_PTHREAD=YesOnLinux cd t ./t0003-attributes.sh --stress which fails pretty quickly with: ==git==4096424==ERROR: LeakSanitizer: detected memory leaks Direct leak of 32 byte(s) in 1 object(s) allocated from: #0 0x7f906de14556 in realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98 #1 0x7f906dc9d2c1 in __pthread_getattr_np nptl/pthread_getattr_np.c:180 #2 0x7f906de2500d in __sanitizer::GetThreadStackTopAndBottom(bool, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:150 #3 0x7f906de25187 in __sanitizer::GetThreadStackAndTls(bool, unsigned long*, unsigned long*, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:614 #4 0x7f906de17d18 in __lsan::ThreadStart(unsigned int, unsigned long long, __sanitizer::ThreadType) ../../../../src/libsanitizer/lsan/lsan_posix.cpp:53 #5 0x7f906de143a9 in ThreadStartFunc<false> ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:431 #6 0x7f906dc9bf51 in start_thread nptl/pthread_create.c:447 #7 0x7f906dd1a677 in __clone3 ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78 As with the previous commit, we can fix this by inserting a barrier that makes sure all threads have finished their setup before continuing. But there's one twist in this case: the thread which calls die() is not one of the worker threads, but the main thread itself! So we need the main thread to wait in the barrier, too, until all threads have gotten to it. And thus we initialize the barrier for num_threads+1, to account for all of the worker threads plus the main one. If we then test as above, t0003 should run indefinitely. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
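Sketched with plain POSIX primitives rather than git's wrapper (whose exact function names are not shown in the message): the barrier is sized for num_threads workers plus one, so the main thread also blocks until every worker has finished its setup.

    #include <pthread.h>

    static pthread_barrier_t start_barrier;

    static void *run_worker(void *data)
    {
        pthread_barrier_wait(&start_barrier); /* wait until all threads are set up */
        /* ... real work, which may eventually call die() ... */
        return NULL;
    }

    static void start_workers(pthread_t *threads, int num_threads)
    {
        /* num_threads workers + 1 for the main thread itself */
        pthread_barrier_init(&start_barrier, NULL, num_threads + 1);
        for (int i = 0; i < num_threads; i++)
            pthread_create(&threads[i], NULL, run_worker, NULL);
        pthread_barrier_wait(&start_barrier); /* the main thread waits, too */
    }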
2024-12-30  index-pack: work around LSan threading race with barrier  Jeff King  1  -0/+6
We sometimes get false positives from our linux-leaks CI job because of a race in LSan itself. The problem is that one thread is still initializing its stack in LSan's code (and allocating memory to do so) while another thread calls die(), taking down the whole process and triggering a leak check. The problem is described in more detail in 993d38a066 (index-pack: spawn threads atomically, 2024-01-05), which tried to fix it by pausing worker threads until all calls to pthread_create() had completed. But that's not enough to fix the problem, because the LSan setup code runs in the threads themselves. So even though pthread_create() has returned, we have no idea if all threads actually finished their setup before letting any of them do real work. We can fix that by using a barrier inside the threads themselves, waiting for all of them to hit the start of their main function before any of them proceed. You can test for the race by running: make SANITIZE=leak THREAD_BARRIER_PTHREAD=YesOnLinux cd t ./t5309-pack-delta-cycles.sh --stress which fails quickly before this patch, and should run indefinitely without it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  thread-utils: introduce optional barrier type  Jeff King  3  -0/+25
One thread primitive we don't yet support is a barrier: it waits for all threads to reach a synchronization point before letting any of them continue. This would be useful for avoiding the LSan race we see in index-pack (and other places) by having all threads complete their initialization before any of them start to do real work. POSIX introduced a pthread_barrier_t in 2004, which does what we want. But if we want to rely on it: 1. Our Windows pthread emulation would need a new set of wrapper functions. There's a Synchronization Barrier primitive there, which was introduced in Windows 8 (which is old enough for us to depend on). 2. macOS (and possibly other systems) has pthreads but not pthread_barrier_t. So there we'd have to implement our own barrier based on the mutex and cond primitives. Those are do-able, but since we only care about avoiding races in our LSan builds, there's an easier way: make it a noop on systems without a native pthread barrier. This patch introduces a "maybe_thread_barrier" API. The clunky name (rather than just using pthread_barrier directly) should hopefully clue people in that on some systems it will do nothing. It's wired to a Makefile knob which has to be triggered manually, and we enable it for the linux-leaks CI jobs (since we know we'll have it there). There are some other possible options: - we could turn it on all the time for Linux systems based on uname. But we really only care about it for LSan builds, and there is no need to add extra code to regular builds. - we could turn it on only for LSan builds. But that would break builds on non-Linux platforms (like macOS) that otherwise should support sanitizers. - we could trigger only on the combination of Linux and LSan together. This isn't too hard to do, but the uname check isn't completely accurate. It is really about what your libc supports, and non-glibc systems might not have it (though at least musl seems to). So we'd risk breaking builds on those systems, which would need to add a new knob. Though the upside would be that running local "make SANITIZE=leak test" would be protected automatically. And of course none of this protects LSan runs from races on systems without pthread barriers. It's probably OK in practice to protect only our CI jobs, though. The race is rare-ish and most leak-checking happens through CI. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
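One possible shape of such an opt-in no-op fallback; the THREAD_BARRIER_PTHREAD knob comes from the message above, but the macro names below are illustrative guesses, not necessarily what thread-utils.h actually defines:

    #include <pthread.h>

    #ifdef THREAD_BARRIER_PTHREAD
    /* Real barrier where pthread_barrier_t exists (e.g. glibc, musl). */
    #define maybe_thread_barrier_t          pthread_barrier_t
    #define maybe_thread_barrier_init(b, n) pthread_barrier_init((b), NULL, (n))
    #define maybe_thread_barrier_wait(b)    pthread_barrier_wait(b)
    #else
    /* No-op elsewhere (e.g. macOS): LSan races simply stay unprotected there. */
    #define maybe_thread_barrier_t          int
    #define maybe_thread_barrier_init(b, n) do { (void)(b); (void)(n); } while (0)
    #define maybe_thread_barrier_wait(b)    do { (void)(b); } while (0)
    #endif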
2024-12-30  Revert "index-pack: spawn threads atomically"  Jeff King  1  -2/+0
This reverts commit 993d38a0669a8056d496797516e743e26b6b8b54. That commit was trying to solve a race between LSan setting up a thread's stack and another thread calling exit(), by making sure that all pthread_create() calls have finished before doing any work that might trigger the exit(). But that isn't sufficient. The setup code actually runs in the individual threads themselves, not in the spawning thread's call to pthread_create(). So while it may have improved the race a bit, you can still trigger it pretty quickly with: make SANITIZE=leak cd t ./t5309-pack-delta-cycles.sh --stress Let's back out that failed attempt so we can try again. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  test-lib: use individual lsan dir for --stress runs  Jeff King  1  -1/+1
When storing output in test-results/, we usually give each numbered run in a --stress set its own output file. But we don't do that for storing LSan logs, so something like: ./t0003-attributes.sh --stress will have many scripts simultaneously creating, writing to, and deleting the test-results/t0003-attributes.leak directory. This can cause logs from one run to be attributed to another, spurious failures when creation and deletion race, and so on. This has always been broken, but nobody noticed because it's rare to do a --stress run with LSan (since the point is for the code to run quickly many times in order to hit races). But if you're trying to find a race in the leak sanitizing code, it makes sense to use these together. We can fix it by using $TEST_RESULTS_BASE, which already incorporates the stress job suffix. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-30  l10n: sv.po: Update Swedish translation  Peter Krefting  1  -63/+212
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
2024-12-29  l10n: fr: v2.48.0  Jean-Noël Avila  1  -88/+243
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
2024-12-28  t-reftable-merged: handle realloc errors  René Scharfe  1  -2/+2
Check reallocation errors in unit tests, like everywhere else. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-28  reftable: handle realloc error in parse_names()  René Scharfe  1  -1/+2
Check the final reallocation for adding the terminating NULL and handle it just like those in the loop. Simply use REFTABLE_ALLOC_GROW instead of keeping the REFTABLE_REALLOC_ARRAY call and adding code to preserve the original pointer value around it. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-28  reftable: fix allocation count on realloc error  René Scharfe  3  -21/+55
When realloc(3) fails, it returns NULL and keeps the original allocation intact. REFTABLE_ALLOC_GROW overwrites both the original pointer and the allocation count variable in that case, simultaneously leaking the original allocation and misrepresenting the number of storable items. parse_names() avoids the leak by keeping the original pointer if reallocation fails, but still increases the allocation count in such a case as if it succeeded. That's OK, because the error handling code just frees everything and doesn't look at names_cap anymore. reftable_buf_add() does the same, but here it is a problem as it leaves the reftable_buf in a broken state, with ->alloc being roughly twice as big as the actually allocated memory, allowing out-of-bounds writes in subsequent calls. Reimplement REFTABLE_ALLOC_GROW to avoid leaks, keep allocation counts in sync and still signal failures to callers while avoiding code duplication in callers. Make it an expression that evaluates to 0 if no reallocation is needed or it succeeded, and to 1 on failure, while keeping the original pointer and allocation counter values. Adjust REFTABLE_ALLOC_GROW_OR_NULL to the new calling convention for REFTABLE_ALLOC_GROW, but keep its support for non-size_t alloc variables for now. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
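A simplified sketch of that calling convention, with an invented helper name rather than the actual reftable macro: the expression is 0 on success (or when no growth is needed) and 1 on failure, and on failure both the pointer and the allocation counter stay untouched.

    #include <stdlib.h>

    /* Grows *p to hold at least "needed" elements of "elem_size" bytes.
     * Returns 0 on success or if no growth was needed, 1 if realloc(3)
     * failed; in the failure case *p and *alloc are left as they were. */
    static int grow_array(void **p, size_t *alloc, size_t needed, size_t elem_size)
    {
        size_t new_alloc;
        void *tmp;

        if (needed <= *alloc)
            return 0;
        new_alloc = (*alloc + 16) * 3 / 2;
        if (new_alloc < needed)
            new_alloc = needed;
        tmp = realloc(*p, new_alloc * elem_size);
        if (!tmp)
            return 1;
        *p = tmp;
        *alloc = new_alloc;
        return 0;
    }

    /* Usage, mirroring how a caller like parse_names() might bail out:
     *
     *     if (grow_array((void **)&names, &names_cap, names_len + 1, sizeof(*names)))
     *             goto done;
     */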
2024-12-28  reftable: avoid leaks on realloc error  René Scharfe  7  -16/+61
When realloc(3) fails, it returns NULL and keeps the original allocation intact. REFTABLE_ALLOC_GROW overwrites both the original pointer and the allocation count variable in that case, simultaneously leaking the original allocation and misrepresenting the number of storable items. parse_names() and reftable_buf_add() avoid leaking by restoring the original pointer value on failure, but all other callers seem to be OK with losing the old allocation. Add a new variant of the macro, REFTABLE_ALLOC_GROW_OR_NULL, which plugs the leak and zeros the allocation counter. Use it for those callers. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-28  l10n: zh_TW: Git 2.48 round 2  Yi-Jyun Pan  1  -54/+98
Co-authored-by: Lumynous <lumynou5.tw@gmail.com> Signed-off-by: Yi-Jyun Pan <pan93412@gmail.com>
2024-12-28  l10n: zh_TW: Git 2.48  Yi-Jyun Pan  1  -198/+347
Signed-off-by: Yi-Jyun Pan <pan93412@gmail.com>
2024-12-27  l10n: bg.po: Updated Bulgarian translation (5804t)  Alexander Shopov  1  -87/+270
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
2024-12-27  sign-compare: avoid comparing ptrdiff with an int/unsigned  Junio C Hamano  1  -1/+1
Instead, offset the base pointer with integer and compare it with the other pointer. Signed-off-by: Junio C Hamano <gitster@pobox.com>
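The pattern in miniature, with hypothetical variable and function names (the actual call site is not shown in the message):

    #include <stddef.h>

    /* before: "ptr - base < len" compares a signed ptrdiff_t with an unsigned
     * size_t and trips -Wsign-compare; comparing two pointers does not. */
    static int in_range(const char *base, const char *ptr, size_t len)
    {
        return ptr < base + len;
    }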
2024-12-27  Documentation: wire up sanity checks for Meson  Patrick Steinhardt  3  -0/+48
Wire up sanity checks for Meson to verify that no man pages are missing. This is similar to the check we already have for our tests. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  t/Makefile: make "check-meson" work with Dash  Patrick Steinhardt  2  -5/+8
The "check-meson" target uses process substitution to check whether extracted contents from "meson.build" match expected contents. Process substitution is unportable though and thus the target will fail when using for example Dash. Fix this by writing data into a temporary directory. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: install static files for HTML documentation  Patrick Steinhardt  1  -0/+21
Now that we generate man pages, articles and the user manual with Meson, the only thing that is still missing in an installation of HTML documents is a couple of static files. Wire these up to finalize Meson's support for generating HTML documentation. Diffing an installation that uses our Makefile with an installation that uses Meson only surfaces a couple of discrepancies now: - Meson doesn't install "everyday.html" and "git-remote-helpers.html". These files are marked as obsolete and don't contain any useful information anymore: they simply point to their modern equivalents. - Meson doesn't install "*.txt" files when asking for HTML docs. I'm not sure why our Makefiles do this in the first place, and it does seem like the resulting installation is fully functional even without those files. Other than that, both layout and file contents are the exact same. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: generate articles  Patrick Steinhardt  3  -0/+164
While the Meson build system already knows to generate man pages and our user manual, it does not yet generate the random assortment of articles that we have. Plug this gap. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27Documentation: refactor "howto-index.sh" for out-of-tree buildsPatrick Steinhardt2-3/+3
The "howto-index.sh" is used to generate an index of our how-to docs. It receives as input the paths to these documents, which would typically be relative to the "Documentation/" directory in Makefile-based builds. In an out-of-tree build though it will get relative that may be rooted somewhere else entirely. The file paths do end up in the generated index, and the expectation is that they should always start with "howto/". But for out-of-tree builds we would populate it with the paths relative to the build directory, which is wrong. Fix the issue by using `$(basename "$file")` to generate the path. While at it, move the script into "howto/" to align it with the location of the comparable "api-index.sh" script. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27Documentation: refactor "api-index.sh" for out-of-tree buildsPatrick Steinhardt2-5/+16
The "api-index.sh" script generates an index of API-related documentation. The script does not handle out-of-tree builds and thus cannot be used easily by Meson. Refactor it to be independent of locations by both accepting a source directory where the API docs live as well as a path to an output file. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: generate user manual  Patrick Steinhardt  1  -0/+32
Our documentation contains a user manual that gives people a short introduction to Git. Our Makefile knows to generate the manual into three different formats: an HTML page, a PDF and an info page. The Meson build instructions don't yet generate any of these. While wiring up all these formats I hit a couple of roadblocks with how we generate our info pages. Even though I eventually resolved these, it made me question whether anybody actually uses info pages in the first place. Checking through a couple of downstream consumers I couldn't find a single user of either the info pages or our PDF manual in Arch Linux, Debian, Fedora, Ubuntu, FreeBSD or OpenBSD. So it's rather safe to assume that there aren't really any users out there, and thus the added complexity does not seem worth it. Wire up support for building the user manual in HTML format and consciously skip over the other two formats. This is basically a form of silent deprecation: if people out there use the other two formats they will eventually complain about them missing in Meson, which means we can wire them up at a later point. If they don't we can phase out these formats eventually. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  Documentation: inline user-manual.conf  Patrick Steinhardt  3  -12/+11
When generating our user manual we set up a bit of extra configuration compared to our normal configuration. This is done by having an extra "user-manual.conf" file that Asciidoc seems to pull in automatically due to matching filenames with "user-manual.txt". This dependency is quite hidden though and thus easy to miss. Furthermore, it seems that Asciidoc does not know to pull it in for out-of-tree builds where we use relative paths. The setup in AsciiDoctor is somewhat different: instead of having two sets of configuration, we condition the use of manual-specific configs based on whether the document type is "book". And as we only build our user manual with that type this is sufficient. Use the same trick for our user manual by inlining the configuration into "asciidoc.conf.in" and making it conditional on whether or not "doctype-book" is defined. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: generate HTML pages for all man page categories  Patrick Steinhardt  1  -1/+1
When generating HTML pages for our man pages we only generate them for category 1 in Meson, which are the pages corresponding to our built-in commands. I cannot tell why I added this filter though: our Makefile installs all man pages, so a Meson-based build misses out on many of them. Fix this by removing the filter. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: fix generation of merge tools  Patrick Steinhardt  1  -2/+1
Our buildsystems generate a list of diff and merge tools that ultimately ends up in our documentation. And while Meson does wire up the logic, it tries to use the TOOL_MODE environment variable to set up the mode. This is wrong though: the mode is set via an argument that we have fixed to 'diff' mode by accident. Fix this such that merge tools are properly generated. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: properly wire up dependencies for our docs  Patrick Steinhardt  1  -10/+16
A couple of Meson documentation targets use `meson.current_source_dir()` to resolve inputs. This has the downside that it does not automagically make Meson track these inputs as a dependency. After all, string arguments really can be anything, even if they happen to match an actual filesystem path. Adapt these build targets to instead use inputs. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: wire up support for Asciidoctor  Patrick Steinhardt  2  -28/+84
While our Makefile supports both Asciidoc and Asciidoctor, our Meson build instructions only support the former. Wire up support for the latter, as well. Our Makefile always favors Asciidoc, but Meson will automatically figure out which of the two to use based on whether they are installed or not. To keep compatibility with our Makefile it favors Asciidoc over Asciidoctor in case both are available. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  meson: enable auto-discovered "gitweb"  Patrick Steinhardt  1  -2/+2
In 7d549fe317 (meson: skip gitweb build when Perl is disabled, 2024-12-20) we have started to conditionally enable "gitweb" based on whether or not Perl is enabled. By accident though that change causes us to not build gitweb in case its feature flag is set to "auto" even if autoconfiguration determines that it could be built. This is because we use "gitweb_option.enabled()", which only checks whether the feature has been explicitly enabled. Fix the issue by using `gitweb_option.allowed()` instead, which returns true in case it is either explicitly enabled or set to "auto". This also works for the case where the feature becomes auto-disabled due to Perl not being present because we use `disable_auto_if(not perl.found())`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  GIT-BUILD-OPTIONS: wire up NO_GITWEB option  Patrick Steinhardt  6  -28/+40
Building our "gitweb" interface is optional in our Makefile and in Meson and not wired up at all with CMake, but disabling it causes a couple of tests in the t950* range that pull in "t/lib-gitweb.sh" to fail. This is because the test library knows to execute gitweb-tests based on whether or not Perl is available, but we may have Perl available and still end up not building gitweb e.g. with `make test NO_GITWEB=YesPlease`. Fix this issue by wiring up a new "NO_GITWEB" build option so that we can skip these tests in case gitweb is not built. Note that this new build option requires us to move the configuration of GIT-BUILD-OPTIONS to a later point in our Meson build instructions. But as that file is only consumed by our tests at runtime this change does not cause any issues. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  GIT-BUILD-OPTIONS: sort variables alphabetically  Patrick Steinhardt  3  -105/+105
The variables declared and substituted in GIT-BUILD-OPTIONS are not ordered in any obvious way. Sort them alphabetically so that it becomes obvious where new variables should go. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  t7611: replace test -f with test_path_is* helpers  Meet Soni  1  -17/+17
Replace `test -f` and `test ! -f` with `test_path_is_file` and `test_path_is_missing` for better debuggability. While `test -f` ensures that the file exists and is a regular file, `test_path_is_file` provides clearer error messages on failure. On the other hand, `test ! -f` checks either the absence of a regular file or the presence of any other filesystem object, but looking at them in the test individually, all of them should've said `test ! -e`, i.e. "there shouldn't be anything at given path on filesystem." Replace these cases with `test_path_is_missing` for better debuggability. Helped-by: karthik nayak <karthik.188@gmail.com> Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Meet Soni <meetsoni3017@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  commit-reach: use `size_t` to track indices when computing merge bases  Patrick Steinhardt  5  -12/+11
The functions `repo_get_merge_bases_many()` and friends accept an array of commits as well as a parameter that indicates how large that array is. This parameter is using a signed integer, which leads to a couple of warnings with -Wsign-compare. Refactor the code to use `size_t` to track indices instead and adapt callers accordingly. While most callers are trivial, there are two callers that require a bit more scrutiny: - builtin/merge-base.c:show_merge_base() subtracts `1` from the `rev_nr` before calling `repo_get_merge_bases_many_dirty()`, so if the variable was `0` it would wrap. This code is fine though because its only caller will execute that code only when `argc >= 2`, and it follows that `rev_nr >= 2`, as well. - bisect.c:check_merge_bases() similarly subtracts `1` from `rev_nr`. Again, there is only a single caller that populates `rev_nr` with `good_revs.nr`. And because a bisection always requires at least one good revision it follows that `rev_nr >= 1`. Mark the file as -Wsign-compare-clean. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
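A tiny sketch of the wrap concern checked above, using a hypothetical caller signature: with a size_t count, `rev_nr - 1` silently wraps to SIZE_MAX when `rev_nr` is 0, which is why both call sites rely on their callers guaranteeing a minimum count.

    #include <stddef.h>

    struct commit; /* opaque here; the real type lives in commit.h */

    /* Hypothetical shape of such a caller after the size_t conversion. */
    static void show_merge_base_sketch(struct commit **rev, size_t rev_nr)
    {
        if (rev_nr < 2)
            return; /* guards the "rev_nr - 1" below against wrapping */
        /* repo_get_merge_bases_many_dirty(rev[0], rev_nr - 1, rev + 1, ...); */
    }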
2024-12-27  shallow: fix -Wsign-compare warnings  Patrick Steinhardt  2  -23/+21
Fix a couple of -Wsign-compare issues in "shallow.c" and mark the file as -Wsign-compare-clean. This change prepares the code for a refactoring of `repo_in_merge_bases_many()`, which will be adapted to accept the number of commits as `size_t` instead of `int`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  builtin/log: fix remaining -Wsign-compare warnings  Patrick Steinhardt  1  -14/+13
Fix remaining -Wsign-compare warnings in "builtin/log.c" and mark the file as -Wsign-compare-clean. While most of the fixes are obvious, one fix requires us to use `cast_size_t_to_int()`, which will cause us to die in case the `size_t` cannot be represented as `int`. This should be fine though, as the data would typically be set either via a config key or via the command line, neither of which should ever exceed a couple of kilobytes of data. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
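For reference, a checked narrowing helper of the kind described can be sketched as follows; git's real cast_size_t_to_int() lives in git-compat-util.h and reports the overflow through die() rather than exit().

    #include <inttypes.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: convert size_t to int, refusing values that do not fit. */
    static inline int cast_size_t_to_int_sketch(size_t a)
    {
        if (a > INT_MAX) {
            fprintf(stderr, "number too large to represent as int: %" PRIuMAX "\n",
                    (uintmax_t)a);
            exit(128);
        }
        return (int)a;
    }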
2024-12-27  builtin/log: use `size_t` to track indices  Patrick Steinhardt  1  -10/+13
Similar as with the preceding commit, adapt "builtin/log.c" so that it tracks array indices via `size_t` instead of using signed integers. This fixes a couple of -Wsign-compare warnings and prepares the code for a similar refactoring of `repo_get_merge_bases_many()` in a subsequent commit. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-12-27  commit-reach: use `size_t` to track indices in `get_reachable_subset()`  Patrick Steinhardt  7  -16/+17
Similar as with the preceding commit, adapt `get_reachable_subset()` so that it tracks array indices via `size_t` instead of using signed integers to fix a couple of -Wsign-compare warnings. Adapt callers accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>