| Commit message | Author | Files | Lines |
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Translate 36 new messages (4694t0f0u) for git 2.24.0.
Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
|
|
As per Wikipedia, "In current technical usage, for one to state that a
feature is deprecated is merely a recommendation against using it." It
is thus contradictory to claim that something is not "officially
deprecated" and then to immediately state that we are both discouraging
its use and pointing people elsewhere.
Signed-off-by: Elijah Newren <newren@gmail.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We recently regressed our rendering with Asciidoctor of "literal"
elements in our manpages, i.e., stuff we have placed within `backticks`
in order to render as monospace. In particular, we lost the bold
rendering of such literal text.
The culprit is f6461b82b9 ("Documentation: fix build with Asciidoctor 2",
2019-09-15), where we switched from DocBook 4.5 to DocBook 5 with
Asciidoctor. As part of the switch, we started using the namespaced
DocBook XSLT stylesheets rather than the non-namespaced ones. (See
f6461b82b9 for more details on why we changed to the namespaced ones.)
The bold literals are implemented as an XSLT snippet <xsl:template
match="literal">...</xsl:template>. Now that we use namespaces, this
doesn't pick up our literals like it used to.
Match for "d:literal" in addition to just "literal", after defining the
d namespace. ("d" is what
http://docbook.sourceforge.net/release/xsl-ns/current/manpages/docbook.xsl
uses.) Note that we need to keep matching without the namespace for
AsciiDoc.
This boldness was introduced by 5121a6d993 ("Documentation: option to
render literal text as bold for manpages", 2009-03-27) and made the
default in 5945717009 ("Documentation: bold literals in man",
2016-05-31).
One reason this was not caught in review is that our doc-diff tool diffs
without any boldness, i.e., it "only" compares text. As pointed out by
Peff in review of this patch, one can use `MAN_KEEP_FORMATTING=1
./doc-diff <...>` to compare the fully formatted output instead.
This has been visually tested with AsciiDoc 8.6.10, Asciidoctor 1.5.5
and Asciidoctor 2.0.10. I've also verified that doc-diff produces
empty output for all three programs, as expected, and that with the
MAN_KEEP_FORMATTING trick, AsciiDoc yields no diff, whereas with
Asciidoctor, we get bold literals, just like we want.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Acked-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Matthias Rüster <matthias.ruester@gmail.com>
Reviewed-by: Ralf Thielow <ralf.thielow@gmail.com>
Reviewed-by: Phillip Szelat <phillip.szelat@gmail.com>
|
|
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The comments for the staging/unstaging test did not accurately
describe the scenario being tested. It is not essential that
the test files being staged/unstaged appear at the end of the
index. All that is required is that the test files are not
flagged with CE_FSMONITOR_VALID and have a position in the
index greater than the number of entries in the index after
unstaging.
The comment for this test has been updated to be more
accurate with respect to the scenario that's being tested.
Signed-off-by: William Baker <William.Baker@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
|
|
Signed-off-by: Tran Ngoc Quan <vnwildman@gmail.com>
|
|
Signed-off-by: Christopher Diaz Riveros <christopher.diaz.riv@gmail.com>
|
|
Signed-off-by: Alessandro Menti <alessandro.menti@alessandromenti.it>
|
|
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
|
|
Generate po/git.pot from v2.24.0-rc1 for git v2.24.0 l10n round 2.
Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
|
|
When this function is passed a path with a trailing slash, it runs right
over the end of that path.
Let's fix this.
Co-authored-by: Alexandr Miloslavskiy <alexandr.miloslavskiy@syntevo.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Without this change, the setting
$feature{'javascript-actions'}{'default'} = [1];
in gitweb.conf breaks gitweb's blame page: clicking on line numbers
displayed in the second column on the page has no effect.
For comparison, with javascript-actions disabled, clicking on line
numbers loads the previous version of the line.
Addresses https://bugs.debian.org/741883.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Robert Luberda <robert@debian.org>
Acked-by: Jakub Narębski <jnareb@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The previous commit includes a failing test for an issue around
fetch.writeCommitGraph and fetching in a repo with a submodule. Here, we
fix that bug and set the test to "test_expect_success".
The problem arises with this set of commands when the remote repo at
<url> has a submodule. Note that --recurse-submodules is not needed to
demonstrate the bug.
$ git clone <url> test
$ cd test
$ git -c fetch.writeCommitGraph=true fetch origin
Computing commit graph generation numbers: 100% (12/12), done.
BUG: commit-graph.c:886: missing parent <hash1> for commit <hash2>
Aborted (core dumped)
As an initial fix, I converted the code in builtin/fetch.c that calls
write_commit_graph_reachable() to instead launch a "git commit-graph
write --reachable --split" process. That code worked, but is not how we
want the feature to work long-term.
That test did demonstrate that the issue must have something to do with
the internal state of the 'git fetch' process.
The write_commit_graph() method in commit-graph.c ensures the commits we
plan to write are "closed under reachability" using close_reachable().
This method walks from the input commits, and uses the UNINTERESTING
flag to mark which commits have already been visited. This allows the
walk to take O(N) time, where N is the number of commits, instead of
O(P) time, where P is the number of paths. (The number of paths can be
exponential in the number of commits.)
However, the UNINTERESTING flag is used in lots of places in the
codebase. This flag usually means some barrier to stop a commit walk,
such as in revision-walking to compare histories. It is not often
cleared after the walk completes because the starting points of those
walks do not have the UNINTERESTING flag, and clear_commit_marks() would
stop immediately.
This is happening during a 'git fetch' call with a remote. The fetch
negotiation is comparing the remote refs with the local refs and marking
some commits as UNINTERESTING.
I tested running clear_commit_marks_many() to clear the UNINTERESTING
flag inside close_reachable(), but the tips did not have the flag, so
that did nothing.
It turns out that the calculate_changed_submodule_paths() method is at
fault. Thanks, Peff, for pointing out this detail! More specifically,
for each submodule, collect_changed_submodules() runs a revision
walk to essentially do file-history on the list of submodules. That
revision walk marks commits UNINTERESTING if they are simplified away
by not changing the submodule.
Instead, I finally arrived at the conclusion that I should use a flag
that is not used in any other part of the code. In commit-reach.c, a
number of flags were defined for commit walk algorithms. The REACHABLE
flag seemed like it made the most sense, and it seems it was not
actually used in the file. The REACHABLE flag was used in early versions
of commit-reach.c, but was removed by 4fbcca4 (commit-reach: make
can_all_from_reach... linear, 2018-07-20).
Add the REACHABLE flag to commit-graph.c and use it instead of
UNINTERESTING in close_reachable(). This fixes the bug in manual
testing.
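To illustrate why a dedicated flag matters here, consider this small
self-contained sketch (simplified stand-in types and names, not Git's
actual commit-graph code):

    #include <stdio.h>

    /* Simplified stand-ins for Git's object flags and commit struct. */
    #define UNINTERESTING (1u << 0)  /* shared with many other walks */
    #define REACHABLE     (1u << 1)  /* dedicated to the closure walk */

    struct commit_node {
        unsigned flags;
        struct commit_node *parent;
    };

    /* Walk from 'tip' to the root, keyed only on the dedicated flag. */
    static int count_closure(struct commit_node *tip)
    {
        int n = 0;
        for (struct commit_node *c = tip; c; c = c->parent) {
            if (c->flags & REACHABLE)
                break;              /* already visited by this walk */
            c->flags |= REACHABLE;
            n++;
        }
        return n;
    }

    int main(void)
    {
        struct commit_node root = { 0, NULL };
        struct commit_node mid = { UNINTERESTING, &root }; /* stale mark */
        struct commit_node tip = { 0, &mid };

        /*
         * A walk keyed on UNINTERESTING would stop at 'mid' and miss
         * 'root'; the dedicated flag visits all three commits.
         */
        printf("commits in closure: %d\n", count_closure(&tip));
        return 0;
    }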
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Jeff King <peff@peff.net>
Helped-by: Szeder Gábor <szeder.dev@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While dogfooding, Johannes found a bug in the fetch.writeCommitGraph
config behavior. His example initially happened during a clone with
--recurse-submodules; we found that it also happens with the first fetch
after cloning a repository that contains a submodule:
$ git clone <url> test
$ cd test
$ git -c fetch.writeCommitGraph=true fetch origin
Computing commit graph generation numbers: 100% (12/12), done.
BUG: commit-graph.c:886: missing parent <hash1> for commit <hash2>
Aborted (core dumped)
In the repo I had cloned, there were really 60 commits to scan, but
only 12 were in the list to write when calling
compute_generation_numbers(). A commit in the list expects to see a
parent, but that parent is not in the list.
A follow-up will fix the bug, but first we create a test that
demonstrates the problem. This test must be careful about an existing
commit-graph file, since GIT_TEST_COMMIT_GRAPH=1 will cause the repo we
are cloning to already have one. This then prevents the incremental
commit-graph write during the first 'git fetch'.
Helped-by: Jeff King <peff@peff.net>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Szeder Gábor <szeder.dev@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Suppose, from a repository that has ".gitmodules", we clone with
--filter=blob:none:
git clone --filter=blob:none --no-checkout \
https://kernel.googlesource.com/pub/scm/git/git
Then we fetch:
git -C git fetch
This will cause an "unable to load config blob object" error, because the
fetch_config_from_gitmodules() invocation in cmd_fetch() will attempt to
load ".gitmodules" (which Git knows to exist because the client has the
tree of HEAD) while fetch_if_missing is set to 0.
fetch_if_missing is set to 0 too early - ".gitmodules" here should be
lazily fetched. Git must set fetch_if_missing to 0 before the fetch
because as part of the fetch, packfile negotiation happens (and we do
not want to fetch any missing objects when checking existence of
objects), but we do not need to set it so early. Move the setting of
fetch_if_missing to the earliest possible point in cmd_fetch(), right
before any fetching happens.
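Conceptually, the reordering looks like this (a heavily simplified,
hypothetical sketch of the flow in cmd_fetch(); the helper names are
stand-ins, not Git's API):

    /* Lazily fetch missing objects from the promisor remote by default. */
    static int fetch_if_missing = 1;

    static void read_gitmodules_config(void) { /* may lazily fetch .gitmodules */ }
    static int negotiate_and_fetch(void) { return 0; }

    static int cmd_fetch_sketch(void)
    {
        read_gitmodules_config();   /* lazy fetching still allowed here */

        /*
         * Disable lazy fetching only now, right before negotiation, so
         * that object-existence checks do not trigger extra fetches.
         */
        fetch_if_missing = 0;

        return negotiate_and_fetch();
    }

    int main(void) { return cmd_fetch_sketch(); }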
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Several config options were combined into a repo_settings struct in
ds/feature-macros, including a move of the "index.version" config
setting in 7211b9e (repo-settings: consolidate some config settings,
2019-08-13).
Unfortunately, that file contains a lot of boilerplate and, in what is
clearly a case of copy-paste overload, the config setting is parsed
with repo_config_get_bool() instead of repo_config_get_int(). This means
that a setting "index.version=4" would not register correctly and would
revert to the default version of 3.
I caught this while incorporating v2.24.0-rc0 into the VFS for Git
codebase, where we really care that the index is in version 4.
This was not caught by the test suite because the version checks placed
in t1600-index.sh did not test the "basic" scenario enough. Here, we
modify the test so that these normal settings are not overridden by
features.manyFiles or GIT_INDEX_VERSION. While the "default" version is
3, it is demoted to version 2 in do_write_index() when the features of
version 3 are not necessary.
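The shape of the fix is roughly the following (a sketch of the
prepare_repo_settings() logic in repo-settings.c; surrounding settings
and defaults are elided):

    /* Within Git's tree; repository.h and config.h provide these types. */
    int value;
    if (!repo_config_get_int(r, "index.version", &value))
        r->settings.index_version = value;
    /*
     * The bug: repo_config_get_bool() was used here, so
     * "index.version=4" came back as the boolean 1 instead of 4.
     */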
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A few days ago Travis CI updated their existing OSX images, including
the Homebrew database in the xcode10.1 OSX image that we use. Since
then installing dependencies in the 'osx-gcc' job fails when it tries
to link gcc@8:
+ brew link gcc@8
Error: No such keg: /usr/local/Cellar/gcc@8
GCC 8 is still installed in the updated image, but unlike before this
update it is no longer linked into '/usr/local'; now we have to link it
by running 'brew link gcc'. So let's do that, and fall back to linking
gcc@8 if that fails, just to be sure.
Our builds on Azure Pipelines are unaffected by this issue. The OSX
image over there doesn't contain the gcc@8 package, so we have to
'brew install' it, which already takes care of linking it to
'/usr/local'. After that the 'brew link gcc' command added by this
patch fails, but the ||-chained fallback 'brew link gcc@8' command
succeeds with an "already linked" warning.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The tenth was at -rc0 ;-)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The Azure Pipelines builds are failing for macOS due to a change in the
location of the perforce cask. The command outputs the following error:
+ brew install caskroom/cask/perforce
Error: caskroom/cask was moved. Tap homebrew/cask-cask instead.
So let's try to call `brew cask install perforce` first (which is what
that error message suggests, in a most round-about way).
Prior to 672f51cb we used to install the 'perforce' package with 'brew
install perforce' (note: no 'cask' in there). The justification for
672f51cb was that the command 'brew install perforce' simply stopped
working, after Homebrew folks decided that it's better to move the
'perforce' package to a "cask". Their justification for this move was
that 'brew install perforce' "can fail due to a checksum mismatch ...",
and casks can be installed without checksum verification. And indeed,
both 'brew cask install perforce' and 'brew install
caskroom/cask/perforce' printed something along the lines of:
==> No checksum defined for Cask perforce, skipping verification
It is unclear why 672f51cb used 'brew install caskroom/cask/perforce'
instead of 'brew cask install perforce'. It appears (by running both
commands on old Travis CI macOS images) that both commands worked all
the same already back then.
In any case, as the error message at the top of this commit message
shows, 'brew install caskroom/cask/perforce' has stopped working
recently, but 'brew cask install perforce' still does, so let's use
that.
CI servers are typically fresh virtual machines, but not always. To
accommodate that, let's try harder if `brew cask install perforce`
fails, by specifically pulling the latest `master` of the
`homebrew-cask` repository.
This will still fail, of course, when `homebrew-cask` falls behind
Perforce's release schedule. But once it is updated, we can now simply
re-run the failed jobs and they will pick up that update.
As for updating `homebrew-cask`: the beginnings of automating this in
https://dev.azure.com/gitgitgadget/git/_build?definitionId=11&_a=summary
will be finished once the next Perforce upgrade comes around.
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
According to t/README, test_must_fail() should only be used to test for
failure in Git commands. Replace the invocations of
`test_must_fail grep` with `! grep`.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As noted by Gábor in [1], the new tests in edefc31873 ("format-patch:
create leading components of output directory", 2019-10-11) cannot be
run independently. Fix this.
[1] https://public-inbox.org/git/20191011144650.GM29845@szeder.dev/
Signed-off-by: Bert Wesarg <bert.wesarg@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Originally, the CI/PR builds that build and test using Visual Studio
were implemented imitating `linux-clang`, i.e. still using the
`Makefile`-based build infrastructure.
Later (but still before the patches made their way into git.git's
`master`), however, this was changed to generate Visual Studio project
files and build the binaries using `MSBuild`, as this reflects more
accurately how Visual Studio users would want to build Git (internally,
Visual Studio uses `MSBuild`, or at least something very similar).
During that transition, we needed to implement a new way to run the test
suite in parallel, as Visual Studio users typically will only have a Git
Bash available (which does not ship with `make` nor with support for
`prove`): we simply implemented a new test helper to run the test suite.
This helper even knows how to run the tests in parallel, but due to a
mistake on this developer's part, it was never turned on in the CI/PR
builds. This results in 2x-3x longer run times of the test phase.
Let's use the `--jobs=10` option to fix this.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To make full use of the work that went into the Visual Studio build &
test jobs in our CI/PR builds, let's turn on strict compiler flags. This
will give us the benefit of Visual C's compiler warnings (which, at
times, seem to catch things that GCC does not catch, and vice versa).
While at it, also turn on optimization; it does not make sense to
produce binaries with debug information, and we can use every ounce of
speed that we get (because the test suite is particularly slow on
Windows, thanks to the need to run inside a Unix shell, which
requires us to use the POSIX emulation layer provided by MSYS2).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Alessandro Menti <alessandro.menti@alessandromenti.it>
|
|
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
|
|
Generate po/git.pot from v2.24.0-rc0 for git v2.24.0 l10n round 1.
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
|
|
While reviewing some dts diffs recently I noticed that the hunk header
logic was failing to find the containing node. This is because the regex
doesn't consider properties that may span multiple lines, i.e.
property = <something>,
<something_else>;
and it got hung up on comments inside nodes that look like the root node
because they start with '/*'. Add tests for these cases and update the
regex to find them. Maybe detecting the root node is too complicated but
forcing it to be a slash with any amount of whitespace up to an open
brace seemed OK. I tried to detect that a comment is in between the
two parts but I wasn't happy with the result, so I just dropped it.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Reviewed-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 't0500-progress-display.sh' all tests running 'test-tool progress
--total=<N>' fail on big-endian systems, e.g. like this:
+ test-tool progress --total=3 Working hard
[...]
+ test_i18ncmp expect out
--- expect 2019-10-18 23:07:54.765523916 +0000
+++ out 2019-10-18 23:07:54.773523916 +0000
@@ -1,4 +1,2 @@
-Working hard: 33% (1/3)<CR>
-Working hard: 66% (2/3)<CR>
-Working hard: 100% (3/3)<CR>
-Working hard: 100% (3/3), done.
+Working hard: 0% (1/12884901888)<CR>
+Working hard: 0% (3/12884901888), done.
The reason for that bogus value is that '--total's parameter is parsed
via parse-options's OPT_INTEGER into a uint64_t variable [1], so the
two bits of 3 end up in the "wrong" bytes on big-endian systems
(12884901888 = 0x300000000).
Change the type of that variable from uint64_t to int, to match what
parse-options expects; in the tests of the progress output we won't
use values that don't fit into an int anyway.
[1] start_progress() expects the total number as an uint64_t, that's
why I chose the same type when declaring the variable holding the
value given on the command line.
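The fix boils down to matching the variable's type to what OPT_INTEGER
writes; roughly (a sketch borrowing names from t/helper/test-progress.c,
not the verbatim patch):

    #include "parse-options.h"   /* in Git's tree */
    #include "progress.h"        /* in Git's tree */

    /* Sketch of the relevant part of cmd__progress(). */
    int total = 0;               /* was: uint64_t total */
    struct option options[] = {
        OPT_INTEGER(0, "total", &total, "total number of items"),
        OPT_END(),
    };
    struct progress *progress;

    /* ... argc = parse_options(...); ... */
    progress = start_progress("Working hard", (uint64_t) total);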
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
[jpag: Debian unstable/ppc64 (big-endian)]
Tested-By: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
[tz: Fedora s390x (big-endian)]
Tested-By: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Synced with 2.24-rc0
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
|
|
The original comment does not describe the type of ~/.zsh/_git
explicitly, and zsh does not warn or fail if a user creates it as a
directory. So inexperienced users could be misled by the original
comment. There is a small update to clarify it.
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Maxim Belsky <public.belsky@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
94da9193a6 ("grep: add support for PCRE v2", 2017-06-01) introduced
a small memory leak visible with valgrind in t7813.
Complete the creation of a PCRE2 specific variable that was missing from
the original change and free the generated table just like it is done
for PCRE1.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
94da9193a6 (grep: add support for PCRE v2, 2017-06-01) didn't include
a way to override the system allocator, and so it is incompatible with
custom allocators (e.g. nedmalloc). This problem became obvious when we
tried to plug a memory leak by `free()`ing a data structure allocated by
PCRE2, triggering a segfault on Windows (where we use nedmalloc by
default).
PCRE2 requires the use of a general context to override the allocator
and therefore, there is a lot more code needed than in PCRE1, including
a couple of wrapper functions.
Extend the grep API with a "destructor" that can be called to clean up
any objects that were created and used globally.
Update `builtin/grep.c` to use that new API, but any other future users
should make sure to have matching `grep_init()`/`grep_destroy()` calls
if they are using the pattern matching functionality.
Move some of the logic that was previously done per thread (in the
workers) into an earlier phase to avoid degrading performance. As the
use of PCRE2 with custom allocators becomes better understood, it is
expected that more of its functions will be instructed to use the
custom allocator as well, as was done in the original code[1] this work
was based on.
[1] https://public-inbox.org/git/3397e6797f872aedd18c6d795f4976e1c579514b.1565005867.git.gitgitgadget@gmail.com/
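For reference, the PCRE2 mechanism involved looks roughly like this (a
sketch with simplified wrappers and no error handling, not the exact
patch; in Git the wrappers would forward to xmalloc()):

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <stdlib.h>

    /* Wrappers with the (size, user_data) signature PCRE2 expects. */
    static void *grep_pcre2_malloc(PCRE2_SIZE size, void *data)
    {
        (void)data;
        return malloc(size);
    }

    static void grep_pcre2_free(void *ptr, void *data)
    {
        (void)data;
        free(ptr);
    }

    static pcre2_general_context *gctx;
    static pcre2_compile_context *cctx;

    /* The general context carries the allocator; passing it when the
     * compile context is created makes PCRE2 allocate through it. */
    static void grep_pcre2_setup(void)
    {
        gctx = pcre2_general_context_create(grep_pcre2_malloc,
                                            grep_pcre2_free, NULL);
        cctx = pcre2_compile_context_create(gctx);
    }

    static void grep_pcre2_teardown(void)
    {
        pcre2_compile_context_free(cctx);
        pcre2_general_context_free(gctx);
    }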
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
63e7e9d8b6 ("git-grep: Learn PCRE", 2011-05-09) didn't include a way
to override the system allocator, and so it is incompatible with
USE_NED_ALLOCATOR, as reported by Dscho[1] (in similar code from PCRE2).
Note that nedmalloc, as well as other custom allocators like jemalloc
and mi-malloc, can be configured at runtime (via `LD_PRELOAD`),
therefore we cannot know at compile time whether a custom allocator is
used or not.
Make the minimum change possible to ensure this combination is
supported, by extending `grep_init()` to set the PCRE1-specific
functions to Git's idea of `malloc()` and `free()`, thereby making sure
all allocations inside PCRE1 are done with the same allocator as the
rest of Git.
This change has negligible performance impact: PCRE needs to allocate
memory once per program run for the character table and for each pattern
compilation. These are both rare events compared to matching patterns
against lines. Actual measurements[2] show that the impact is lost in
the noise.
[1] https://public-inbox.org/git/pull.306.git.gitgitgadget@gmail.com
[2] https://public-inbox.org/git/7f42007f-911b-c570-17f6-1c6af0429586@web.de
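The PCRE1 side is much simpler, since PCRE1 exposes global allocator
hooks; conceptually the change amounts to something like this (a sketch,
not the verbatim patch):

    #include <stdlib.h>
    #include <pcre.h>

    /*
     * PCRE1's allocator is a pair of global function pointers; pointing
     * them at Git's malloc()/free() (xmalloc() in the real patch) keeps
     * all PCRE1 allocations in the same heap as the rest of Git.
     */
    static void grep_init_pcre1_allocator(void)
    {
        pcre_malloc = malloc;
        pcre_free = free;
    }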
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When pushing more than one reference with the --atomic option, the
server is supposed to perform a single atomic transaction to update the
references, so that they either all succeed or all fail. This
works fine when pushing locally or over SSH, but when pushing over HTTP,
we fail to pass the atomic capability to the remote side. In fact, we
have not reported this capability to any remote helpers during the life
of the feature.
Now normally, things happen to work nevertheless, since we actually
check for most types of failures, such as non-fast-forward updates, on
the client side, and just abort the entire attempt. However, if the
server side reports a problem, such as the inability to lock a ref, the
transaction isn't atomic, because we haven't passed the appropriate
capability over and the remote side has no way of knowing that we wanted
atomic behavior.
Fix this by passing the option from the transport code through to remote
helpers, and from the HTTP remote helper down to send-pack. With this
change, we can detect if the server side rejects the push and report
back appropriately. Note the difference in the messages: the remote
side reports "atomic transaction failed", while our own checking rejects
pushes with the message "atomic push failed".
Document the atomic option in the remote helper documentation, so other
implementers can implement it if they like.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This changes the indent from
"<tab><sp><sp><sp><sp><sp><sp><sp><sp>"
to
"<tab><tab>"
so that the statement lines up with the rest of the block.
Signed-off-by: Norman Rasmussen <norman@rasmussen.co.za>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Use argv_array to build an array of strings instead of open-coding it.
This simplifies the code a bit.
We also need to make the specs parameter of push(), push_dav() and
push_git() const to match the argv member of the argv_array. That's
fine, as all three only actually read from the specs array anyway.
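For readers unfamiliar with the API, a usage sketch (placeholder
variables, not the actual push_git() code):

    #include "argv-array.h"   /* in Git's tree */

    /* Build the child's argument list without manual counting/sizing;
     * the array stays NULL-terminated at all times. */
    struct argv_array args = ARGV_ARRAY_INIT;
    int i;

    argv_array_push(&args, "git-send-pack");   /* placeholder command */
    for (i = 0; i < nr_spec; i++)              /* placeholder loop */
        argv_array_push(&args, specs[i]);

    /* args.argv is a NULL-terminated char ** ready to hand to a child
     * process; args.argc holds the count. */
    argv_array_clear(&args);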
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Make use of utf8_strnwidth()'s feature to skip ANSI escape sequences
instead of open-coding it. This shortens the code and makes it more
consistent.
This changes the behavior, though: The old code skips all kinds of
Control Sequence Introducer sequences, while utf8_strnwidth() only skips
the Select Graphic Rendition kind, i.e. those ending with "m". They are
used for specifying color and font attributes like boldness. The only
other kind of escape sequence we print in Git is Erase in Line, ending
with "K". That's not used for columnar output, so this difference
actually doesn't matter here.
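For context, a sketch of how the helper is used (assuming the
utf8_strnwidth(string, len, skip_ansi) signature in utf8.h; the
function and variable names are placeholders):

    #include <string.h>
    #include "utf8.h"   /* in Git's tree */

    /* Display width of 's', counting multi-byte characters correctly
     * and, with skip_ansi set, ignoring SGR sequences like "\033[1m". */
    static int item_display_width(const char *s)
    {
        return utf8_strnwidth(s, strlen(s), 1 /* skip_ansi */);
    }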
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The first step for deleting an item from a linked list is to locate the
item preceding it. Be more careful in release_request() and handle an
empty list. This only has consequences for invalid delete requests
(removing the same item twice, or deleting an item that was never added
to the list), but simplifies the loop condition as well as the check
after the loop.
Once we found the item's predecessor in the list, update its next
pointer to skip over the item, which removes it from the list. In other
words: Make the item's successor the successor of its predecessor.
(At this point entry->next == request and prev->next == lock,
respectively.) This is a bit simpler and saves a pointer dereference.
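The pattern described above, in a small self-contained form (generic
names, not the http-push structs):

    #include <stddef.h>

    struct item {
        struct item *next;
    };

    /* Remove 'victim' from the singly linked list rooted at '*head'.
     * An empty list, or an item not on the list, is a no-op. */
    static void list_remove(struct item **head, struct item *victim)
    {
        struct item *prev;

        if (*head == victim) {
            *head = victim->next;
            return;
        }
        for (prev = *head; prev; prev = prev->next) {
            if (prev->next == victim) {
                /* make the victim's successor the successor of
                 * its predecessor */
                prev->next = victim->next;
                return;
            }
        }
    }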
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
git stash push does not recursively stash submodules, but if
submodule.recurse is set, it may recursively reset --hard them. Having
only the destructive action recurse is likely to be surprising
behaviour, and unlikely to be desirable, so the easiest fix should be to
ensure that the call to git reset --hard never recurses into submodules.
This matches the behavior of check_changes_tracked_files, which ignores
submodules.
Signed-off-by: Jakob Jarmar <jakob@jarmar.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: kdnakt <a.kid.1985@gmail.com>
Signed-off-by: Pratyush Yadav <me@yadavpratyush.com>
|