path: root/checkout.h
Date | Commit message | Author | Files | Lines
2018-05-31 | l10n: git.pot: v2.18.0 round 1 (108 new, 14 removed) | Jiang Xin | 1 | -2259/+2739
Generate po/git.pot from v2.18.0-rc0 for git v2.18.0 l10n round 1. Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
2018-05-30 | Git 2.18-rc0 (tag: v2.18.0-rc0) | Junio C Hamano | 2 | -1/+37
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-30 | The seventh batch for 2.18 | Junio C Hamano | 1 | -0/+80
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-23 | The sixth batch for 2.18 | Junio C Hamano | 1 | -0/+101
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-23 | remote-curl: accept compressed responses with protocol v2 | Brandon Williams | 1 | -0/+1
Configure curl to accept compressed responses when using protocol v2 by setting `CURLOPT_ENCODING` to "", which indicates that curl should send an "Accept-Encoding" header with all supported compression encodings. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-23 | remote-curl: accept all encodings supported by curl | Brandon Williams | 3 | -6/+11
Configure curl to accept all encodings which curl supports instead of only accepting gzip responses.

This fixes an issue when using an installation of curl which is built without the "zlib" feature. Since aa90b9697 (Enable info/refs gzip decompression in HTTP client, 2012-09-19) we end up requesting "gzip" encoding anyway despite libcurl not being able to decode it. Worse, instead of getting a clear error message indicating so, we end up falling back to "dumb" http, producing a confusing and difficult to debug result.

Since curl doesn't do any checking to verify that it supports a requested encoding, instead set the curl option `CURLOPT_ENCODING` with an empty string, indicating that curl should send an "Accept-Encoding" header containing only the encodings supported by curl.

Reported-by: Anton Golubev <anton.golubev@gmail.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
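As an illustration of the setting described above, here is a minimal standalone libcurl sketch (not Git's remote-curl code; the URL is made up) showing the effect of passing an empty string for `CURLOPT_ENCODING`:

    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl;
        CURLcode res;

        curl_global_init(CURL_GLOBAL_DEFAULT);
        curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://example.com/repo.git/info/refs");
        /*
         * "" means: advertise exactly the encodings this libcurl build
         * supports (gzip only if zlib is available) and decompress the
         * response transparently.
         */
        curl_easy_setopt(curl, CURLOPT_ENCODING, "");

        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res != CURLE_OK;
    }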
2018-05-22 | Git 2.17.1 (tag: v2.17.1) | Junio C Hamano | 3 | -2/+18
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | Git 2.16.4 (tag: v2.16.4) | Junio C Hamano | 3 | -2/+7
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | Git 2.15.2 (tag: v2.15.2) | Junio C Hamano | 2 | -1/+4
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | Git 2.14.4 (tag: v2.14.4) | Junio C Hamano | 3 | -2/+7
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | Git 2.13.7 (tag: v2.13.7) | Junio C Hamano | 3 | -2/+22
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | fsck: complain when .gitmodules is a symlink | Jeff King | 2 | -2/+38
We've recently forbidden .gitmodules to be a symlink in verify_path(). And it's an easy way to circumvent our fsck checks for .gitmodules content. So let's complain when we see it. Signed-off-by: Jeff King <peff@peff.net>
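In sketch form, and borrowing the name-matching helpers added elsewhere in this series, the new tree check amounts to something like the fragment below (a paraphrase, not the literal patch; the message id and report() call are approximations):

    /* inside fsck's walk over a tree's entries */
    if (S_ISLNK(mode) &&
        (is_hfs_dotgitmodules(name) || is_ntfs_dotgitmodules(name)))
        retval += report(options, &item->object,
                         FSCK_MSG_GITMODULES_SYMLINK,
                         ".gitmodules is a symbolic link");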
2018-05-22 | index-pack: check .gitmodules files with --strict | Jeff King | 3 | -0/+60
Now that the internal fsck code has all of the plumbing we need, we can start checking incoming .gitmodules files. Naively, it seems like we would just need to add a call to fsck_finish() after we've processed all of the objects. And that would be enough to cover the initial test included here. But there are two extra bits:

  1. We currently don't bother calling fsck_object() at all for blobs, since it has traditionally been a noop. We'd actually catch these blobs in fsck_finish() at the end, but it's more efficient to check them when we already have the object loaded in memory.

  2. The second pass done by fsck_finish() needs to access the objects, but we're actually indexing the pack in this process. In theory we could give the fsck code a special callback for accessing the in-pack data, but it's actually quite tricky:

       a. We don't have an internal efficient index mapping oids to packfile offsets. We only generate it on the fly as part of writing out the .idx file.

       b. We'd still have to reconstruct deltas, which means we'd basically have to replicate all of the reading logic in packfile.c.

Instead, let's avoid running fsck_finish() until after we've written out the .idx file, and then just add it to our internal packed_git list.

This does mean that the objects are "in the repository" before we finish our fsck checks. But unpack-objects already exhibits this same behavior, and it's an acceptable tradeoff here for the same reason: the quarantine mechanism means that pushes will be fully protected.

In addition to a basic push test in t7415, we add a sneaky pack that reverses the usual object order in the pack, requiring that index-pack access the tree and blob during the "finish" step. This already works for unpack-objects (since it will have written out loose objects), but we'll check it with this sneaky pack for good measure.

Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | unpack-objects: call fsck_finish() after fscking objects | Jeff King | 2 | -1/+11
As with the previous commit, we must call fsck's "finish" function in order to catch any queued objects for .gitmodules checks. This second pass will be able to access any incoming objects, because we will have exploded them to loose objects by now. This isn't quite ideal, because it means that bad objects may have been written to the object database (and a subsequent operation could then reference them, even if the other side doesn't send the objects again). However, this is sufficient when used with receive.fsckObjects, since those loose objects will all be placed in a temporary quarantine area that will get wiped if we find any problems. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | fsck: call fsck_finish() after fscking objects | Jeff King | 2 | -0/+7
Now that the internal fsck code is capable of checking .gitmodules files, we just need to teach its callers to use the "finish" function to check any queued objects. With this, we can now catch the malicious case in t7415 with git-fsck. Signed-off-by: Jeff King <peff@peff.net>
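Paraphrased, the shape of the change in builtin/fsck.c is a single extra call once the normal object walk is done, draining the queue of .gitmodules blobs collected earlier:

    /* after all loose and packed objects have been checked */
    if (fsck_finish(&fsck_obj_options))
        errors_found |= ERROR_OBJECT;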
2018-05-22 | fsck: check .gitmodules content | Jeff King | 1 | -1/+59
This patch detects and blocks submodule names which do not match the policy set forth in submodule-config. These should already be caught by the submodule code itself, but putting the check here means that newer versions of Git can protect older ones from malicious entries (e.g., a server with receive.fsckObjects will block the objects, protecting clients which fetch from it). As a side effect, this means fsck will also complain about .gitmodules files that cannot be parsed (or were larger than core.bigFileThreshold). Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | fsck: handle promisor objects in .gitmodules check | Jeff King | 1 | -0/+3
If we have a tree that points to a .gitmodules blob but don't have that blob, we can't check its contents. This produces an fsck error when we encounter it. But in the case of a promisor object, this absence is expected, and we must not complain. Note that this can technically circumvent our transfer.fsckObjects check. Imagine a client fetches a tree, but not the matching .gitmodules blob. An fsck of the incoming objects will show that we don't have enough information. Later, we do fetch the actual blob. But we have no idea that it's a .gitmodules file. The only ways to get around this would be to re-scan all of the existing trees whenever new ones enter (which is expensive), or to somehow persist the gitmodules_found set between fsck runs (which is complicated). In practice, it's probably OK to ignore the problem. Any repository which has all of the objects (including the one serving the promisor packs) can perform the checks. Since promisor packs are inherently about a hierarchical topology in which clients rely on upstream repositories, those upstream repositories can protect all of their downstream clients from broken objects. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | fsck: detect gitmodules files | Jeff King | 2 | -0/+65
In preparation for performing fsck checks on .gitmodules files, this commit plumbs in the actual detection of the files. Note that unlike most other fsck checks, this cannot be a property of a single object: we must know that the object is found at a ".gitmodules" path at the root tree of a commit. Since the fsck code only sees one object at a time, we have to mark the related objects to fit the puzzle together. When we see a commit we mark its tree as a root tree, and when we see a root tree with a .gitmodules file, we mark the corresponding blob to be checked.

In an ideal world, we'd check the objects in topological order: commits followed by trees followed by blobs. In that case we can avoid ever loading an object twice, since all markings would be complete by the time we get to the marked objects. And indeed, if we are checking a single packfile, this is the order in which Git will generally write the objects. But we can't count on that:

  1. git-fsck may show us the objects in arbitrary order (loose objects are fed in sha1 order, but we may also have multiple packs, and we process each pack fully in sequence).

  2. The type ordering is just what git-pack-objects happens to write now. The pack format does not require a specific order, and it's possible that future versions of Git (or a custom version trying to fool official Git's fsck checks!) may order it differently.

  3. We may not even be fscking all of the relevant objects at once. Consider pushing with transfer.fsckObjects, where one push adds a blob at path "foo", and then a second push adds the same blob at path ".gitmodules". The blob is not part of the second push at all, but we need to mark and check it.

So in the general case, we need to make up to three passes over the objects: once to make sure we've seen all commits, then once to cover any trees we might have missed, and then a final pass to cover any .gitmodules blobs we found in the second pass.

We can simplify things a bit by loosening the requirement that we find .gitmodules only at root trees. Technically a file like "subdir/.gitmodules" is not parsed by Git, but it's not unreasonable for us to declare that Git is aware of all ".gitmodules" files and make them eligible for checking. That lets us drop the root-tree requirement, which eliminates one pass entirely. And it makes our worst case much better: instead of potentially queueing every root tree to be re-examined, the worst case is that we queue each unique .gitmodules blob for a second look.

This patch just adds the boilerplate to find .gitmodules files. The actual content checks will come in a subsequent commit.

Signed-off-by: Jeff King <peff@peff.net>
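A condensed sketch of the marking scheme (using the oidset API for the "to check" queue; the entry fields here are schematic and the real patch differs in detail):

    static struct oidset gitmodules_found = OIDSET_INIT;
    static struct oidset gitmodules_done = OIDSET_INIT;

    /* while checking a tree: remember any blob named ".gitmodules" */
    if (S_ISREG(entry.mode) && !strcmp(entry.name, ".gitmodules"))
        oidset_insert(&gitmodules_found, &entry.oid);

    /* while checking a blob already in memory: handle it right away */
    if (oidset_contains(&gitmodules_found, &blob->object.oid)) {
        oidset_insert(&gitmodules_done, &blob->object.oid);
        /* ... run the .gitmodules content checks on the buffer ... */
    }

    /* fsck_finish(): a second pass over whatever is still unchecked */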
2018-05-22 | fsck: actually fsck blob data | Jeff King | 3 | -24/+28
Because fscking a blob has always been a noop, we didn't bother passing around the blob data. In preparation for content-level checks, let's fix up a few things:

  1. The fsck_object() function just returns success for any blob. Let's add a noop fsck_blob(), which we can fill in with actual logic later.

  2. The fsck_loose() function in builtin/fsck.c just threw away blob content after loading it. Let's hold onto it until after we've called fsck_object(). The easiest way to do this is to just drop the parse_loose_object() helper entirely. Incidentally, this also fixes a memory leak: if we successfully loaded the object data but did not parse it, we would have left the function without freeing it.

  3. When fsck_loose() loads the object data, it does so with a custom read_loose_object() helper. This function streams any blobs, regardless of size, under the assumption that we're only checking the sha1. Instead, let's actually load blobs smaller than big_file_threshold, as the normal object-reading code-paths would do. This lets us fsck small files, and a NULL return is an indication that the blob was so big that it needed to be streamed, and we can pass that information along to fsck_blob().

Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | fsck: simplify ".git" check | Jeff King | 1 | -3/+1
There's no need for us to manually check for ".git"; it's a subset of the other filesystem-specific tests. Dropping it makes our code slightly shorter. More importantly, the existing code may make a reader wonder why ".GIT" is not covered here, and whether that is a bug (it isn't, as it's also covered in the filesystem-specific tests). Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | index-pack: make fsck error message more specific | Jeff King | 2 | -2/+2
If fsck reports an error, we say only "Error in object". This isn't quite as bad as it might seem, since the fsck code would have dumped some errors to stderr already. But it might help to give a little more context. The earlier output would not have even mentioned "fsck", and that may be a clue that the "fsck.*" or "*.fsckObjects" config may be relevant. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | verify_path: disallow symlinks in .gitmodules | Jeff King | 4 | -15/+37
There are a few reasons it's not a good idea to make .gitmodules a symlink, including:

  1. It won't be portable to systems without symlinks.

  2. It may behave inconsistently, since Git may look at this file in the index or a tree without bothering to resolve any symbolic links. We don't do this _yet_, but the config infrastructure is there and it's planned for the future.

With some clever code, we could make (2) work. And some people may not care about (1) if they only work on one platform. But there are a few security reasons to simply disallow it:

  a. A symlinked .gitmodules file may circumvent any fsck checks of the content.

  b. Git may read and write from the on-disk file without sanity checking the symlink target. So for example, if you link ".gitmodules" to "../oops" and run "git submodule add", we'll write to the file "oops" outside the repository.

Again, both of those are problems that _could_ be solved with sufficient code, but given the complications in (1) and (2), we're better off just outlawing it explicitly.

Note the slightly tricky call to verify_path() in update-index's update_one(). There we may not have a mode if we're not updating from the filesystem (e.g., we might just be removing the file). Passing "0" as the mode there works fine; since it's not a symlink, we'll just skip the extra checks.

Signed-off-by: Jeff King <peff@peff.net>
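Conceptually the new restriction boils down to a check like this inside the per-component validation, now that the mode is available there (a sketch; the helper names come from the related commits in this series):

    /* when validating a ".gitmodules"-like path component */
    if (is_hfs_dotgitmodules(component) || is_ntfs_dotgitmodules(component)) {
        if (S_ISLNK(mode))
            return 0; /* reject: .gitmodules must not be a symlink */
    }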
2018-05-22 | update-index: stat updated files earlier | Jeff King | 1 | -8/+17
In update_one(), we check verify_path() on the proposed path before doing anything else. In preparation for having verify_path() look at the file mode, let's stat the file earlier, so we can check the mode accurately. This is made a bit trickier by the fact that this function only does an lstat in a few code paths (the ones that flow down through process_path()). So we can speculatively do the lstat() here and pass the results down, and just use a dummy mode for cases where we won't actually be updating the index from the filesystem. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | verify_dotfile: mention case-insensitivity in comment | Jeff King | 1 | -1/+4
We're more restrictive than we need to be in matching ".GIT" on case-sensitive filesystems; let's make a note that this is intentional. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | verify_path: drop clever fallthrough | Jeff King | 1 | -4/+4
We check ".git" and ".." in the same switch statement, and fall through the cases to share the end-of-component check. While this saves us a line or two, it makes modifying the function much harder. Let's just write it out. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | skip_prefix: add case-insensitive variant | Jeff King | 1 | -0/+17
We have the convenient skip_prefix() helper, but if you want to do case-insensitive matching, you're stuck doing it by hand. We could add an extra parameter to the function to let callers ask for this, but the function is small and somewhat performance-critical. Let's just re-implement it for the case-insensitive version. Signed-off-by: Jeff King <peff@peff.net>
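A close, self-contained approximation of the new helper (the real one lives in git-compat-util.h and uses Git's own ctype table):

    #include <ctype.h>
    #include <stdio.h>

    /* Like skip_prefix(), but match the prefix case-insensitively. */
    static int skip_iprefix(const char *str, const char *prefix, const char **out)
    {
        do {
            if (!*prefix) {
                *out = str;
                return 1;
            }
        } while (tolower((unsigned char)*str++) ==
                 tolower((unsigned char)*prefix++));
        return 0;
    }

    int main(void)
    {
        const char *rest;
        if (skip_iprefix(".GitModules extra", ".gitmodules", &rest))
            printf("matched; rest = \"%s\"\n", rest); /* " extra" */
        return 0;
    }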
2018-05-22 | is_{hfs,ntfs}_dotgitmodules: add tests | Johannes Schindelin | 2 | -0/+106
This tests primarily for NTFS issues, but also adds one example of an HFS+ issue. Thanks go to Congyi Wu for coming up with the list of examples where NTFS would possibly equate the filename with `.gitmodules`. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | is_ntfs_dotgit: match other .git files | Johannes Schindelin | 2 | -1/+93
When we started to catch NTFS short names that clash with .git, we only looked for GIT~1. This is sufficient because we only ever clone into an empty directory, so .git is guaranteed to be the first subdirectory or file in that directory.

However, even with a fresh clone, .gitmodules is *not* necessarily the first file to be written that would want the NTFS short name GITMOD~1: a malicious repository can add .gitmodul0000 and friends, which sorts before `.gitmodules` and is therefore checked out *first*. For that reason, we have to test not only for ~1 short names, but for others, too.

It's hard to just adapt the existing checks in is_ntfs_dotgit(): since Windows 2000 (i.e., in all Windows versions still supported by Git), NTFS short names are only generated in the <prefix>~<number> form up to number 4. After that, a *different* prefix is used, calculated from the long file name using an undocumented, but stable algorithm. For example, the short name of .gitmodules would be GITMOD~1, but if it is taken, and all of ~2, ~3 and ~4 are taken, too, the short name GI7EBA~1 will be used. From there, collisions are handled by incrementing the number, shortening the prefix as needed (until ~9999999 is reached, in which case NTFS will not allow the file to be created).

We'd also want to handle .gitignore and .gitattributes, which suffer from a similar problem, using the fall-back short names GI250A~1 and GI7D29~1, respectively.

To accommodate for that, we could reimplement the hashing algorithm, but it is just safer and simpler to provide the known prefixes. This algorithm has been reverse-engineered and described at https://usn.pw/blog/gen/2015/06/09/filenames/, which is defunct but still available via https://web.archive.org/. These can be recomputed by running the following Perl script:

-- snip --
use warnings;
use strict;

sub compute_short_name_hash ($) {
    my $checksum = 0;
    foreach (split('', $_[0])) {
        $checksum = ($checksum * 0x25 + ord($_)) & 0xffff;
    }

    $checksum = ($checksum * 314159269) & 0xffffffff;
    $checksum = 1 + (~$checksum & 0x7fffffff) if ($checksum & 0x80000000);
    $checksum -= (($checksum * 1152921497) >> 60) * 1000000007;

    return scalar reverse sprintf("%x", $checksum & 0xffff);
}

print compute_short_name_hash($ARGV[0]);
-- snap --

E.g., running that with the argument ".gitignore" will result in "250a" (which then becomes "gi250a" in the code).

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | is_hfs_dotgit: match other .git files | Jeff King | 2 | -12/+51
Both verify_path() and fsck match ".git", ".GIT", and other variants specific to HFS+. Let's allow matching other special files like ".gitmodules", which we'll later use to enforce extra restrictions via verify_path() and fsck. Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | is_ntfs_dotgit: use a size_t for traversing string | Jeff King | 1 | -1/+1
We walk through the "name" string using an int, which can wrap to a negative value and cause us to read random memory before our array (e.g., by creating a tree with a name >2GB, since "int" is still 32 bits even on most 64-bit platforms). Worse, this is easy to trigger during the fsck_tree() check, which is supposed to be protecting us from malicious garbage. Note one bit of trickiness in the existing code: we sometimes assign -1 to "len" at the end of the loop, and then rely on the "len++" in the for-loop's increment to take it back to 0. This is still legal with a size_t, since assigning -1 will turn into SIZE_MAX, which then wraps around to 0 on increment. Signed-off-by: Jeff King <peff@peff.net>
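The wrap-around trick mentioned at the end is easy to convince yourself of with a tiny standalone program (not Git code):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        /*
         * Assigning -1 to a size_t yields SIZE_MAX, and the loop's
         * "len++" then wraps it back to 0, so the existing logic in
         * is_ntfs_dotgit() keeps working after the type change.
         */
        size_t len = -1;
        len++;
        printf("%zu\n", len); /* prints 0 */
        return 0;
    }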
2018-05-22 | submodule-config: verify submodule names as paths | Jeff King | 5 | -0/+143
Submodule "names" come from the untrusted .gitmodules file, but we blindly append them to $GIT_DIR/modules to create our on-disk repo paths. This means you can do bad things by putting "../" into the name (among other things). Let's sanity-check these names to avoid building a path that can be exploited. There are two main decisions:

  1. What should the allowed syntax be? It's tempting to reuse verify_path(), since submodule names typically come from in-repo paths. But there are two reasons not to:

       a. It's technically more strict than what we need, as we really care only about breaking out of the $GIT_DIR/modules/ hierarchy. E.g., having a submodule named "foo/.git" isn't actually dangerous, and it's possible that somebody has manually given such a funny name.

       b. Since we'll eventually use this checking logic in fsck to prevent downstream repositories, it should be consistent across platforms. Because verify_path() relies on is_dir_sep(), it wouldn't block "foo\..\bar" on a non-Windows machine.

  2. Where should we enforce it? These days most of the .gitmodules reads go through submodule-config.c, so I've put it there in the reading step. That should cover all of the C code.

     We also construct the name for "git submodule add" inside the git-submodule.sh script. This is probably not a big deal for security since the name is coming from the user anyway, but it would be polite to remind them if the name they pick is invalid (and we need to expose the name-checker to the shell anyway for our test scripts).

This patch issues a warning when reading .gitmodules and just ignores the related config entry completely. This will generally end up producing a sensible error, as it works the same as a .gitmodules file which is missing a submodule entry (so "submodule update" will barf, but "git clone --recurse-submodules" will print an error but not abort the clone).

There is one minor oddity, which is that we print the warning once per malformed config key (since that's how the config subsystem gives us the entries). So in the new test, for example, the user would see three warnings. That's OK, since the intent is that this case should never come up outside of malicious repositories (and then it might even benefit the user to see the message multiple times).

Credit for finding this vulnerability and the proof of concept from which the test script was adapted goes to Etienne Stalmans.

Signed-off-by: Jeff King <peff@peff.net>
2018-05-22 | submodule: add --dissociate option to add/update commands | Casey Fitzpatrick | 4 | -5/+48
Add the --dissociate option to the add and update commands. Both are clone helper commands that already take the --reference option which --dissociate pairs with. Signed-off-by: Casey Fitzpatrick <kcghost@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | submodule: add --progress option to add command | Casey Fitzpatrick | 3 | -1/+27
The '--progress' option was introduced in 72c5f88311d (clone: pass --progress decision to recursive submodules, 2016-09-22) to fix the progress reporting of the clone command. Also add the progress option to the 'submodule add' command. The update command already supports the progress flag, but it is not documented. Signed-off-by: Casey Fitzpatrick <kcghost@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | submodule: clean up substitutions in script | Casey Fitzpatrick | 1 | -4/+4
'recommend_shallow' and 'jobs' variables do not need quotes. They only hold a single token value, and even if they were multi-token it is likely we would want them split at IFS rather than pass a single string. 'progress' is a boolean value. Treat it like the other boolean values in the script by using a substitution. Signed-off-by: Casey Fitzpatrick <kcghost@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-22 | unpack_trees_options: free messages when done | Martin Ågren | 5 | -4/+26
The strings allocated in `setup_unpack_trees_porcelain()` are never freed. Provide a function `clear_unpack_trees_porcelain()` to do so and call it where we use `setup_unpack_trees_porcelain()`. The only non-trivial user is `unpack_trees_start()`, where we should place the new call in `unpack_trees_finish()`. We keep the string pointers in an array, mixing pointers to static memory and memory that we allocate on the heap. We also keep several copies of the individual pointers. So we need to make sure that we do not free what we must not free and that we do not double-free. Let a separate argv_array take ownership of all the strings we create so that we can easily free them. Zero the whole array of string pointers to make sure that we do not leave any dangling pointers. Note that we only take responsibility for the memory allocated in `setup_unpack_trees_porcelain()` and not any other members of the `struct unpack_trees_options`. Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
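The resulting cleanup helper is small; in sketch form (field names approximate), it hands every allocated string to the argv_array and then wipes the whole pointer table so nothing dangles:

    void clear_unpack_trees_porcelain(struct unpack_trees_options *opts)
    {
        /* frees all strings that setup_unpack_trees_porcelain() allocated */
        argv_array_clear(&opts->msgs_to_free);
        /* zero every message pointer, static and heap alike */
        memset(opts->msgs, 0, sizeof(opts->msgs));
    }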
2018-05-22 | argv-array: return the pushed string from argv_push*() | Junio C Hamano | 2 | -4/+6
Such an API change allows us to use an argv_array this way:

    struct argv_array to_free = ARGV_ARRAY_INIT;
    const char *msg;

    if (some condition) {
        msg = "constant string message";
        ... other logic ...
    } else {
        msg = argv_array_pushf(&to_free, "format %s", var);
    }
    ... use "msg" ...
    ... do other things ...
    argv_array_clear(&to_free);

Note that argv_array_pushl() and argv_array_pushv() are used to push one or more strings with a single call, so we do not return any one of these strings from these two functions in order to reduce the chance to misuse the API.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | travis-ci: run gcc-8 on linux-gcc jobs | Nguyễn Thái Ngọc Duy | 2 | -0/+6
Switch from gcc-4.8 to gcc-8. Newer compilers come with more warning checks (usually in -Wextra). Since -Wextra is enabled in developer mode (which is also enabled in travis), this lets travis report more warnings before other people do it. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | regex: do not call `regfree()` if compilation fails | Martin Ågren | 2 | -3/+0
It is apparently undefined behavior to call `regfree()` on a regex where `regcomp()` failed. The language in [1] is a bit muddy, at least to me, but the clearest hint is this (`preg` is the `regex_t *`):

    Upon successful completion, the regcomp() function shall return 0. Otherwise, it shall return an integer value indicating an error as described in <regex.h>, and the content of preg is undefined.

Funnily enough, there is also the `regerror()` function which should be given a pointer to such a "failed" `regex_t` -- the content of which would supposedly be undefined -- and which may investigate it to come up with a detailed error message. In any case, the example in that document shows how `regfree()` is not called after `regcomp()` fails.

We have quite a few users of this API and most get this right. These three users do not. Several implementations can handle this just fine [2] and these code paths supposedly have not wreaked havoc or we'd have heard about it. (These are all in code paths where git got bad input and is just about to die anyway.) But let's just avoid the issue altogether.

[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/regcomp.html
[2] https://www.redhat.com/archives/libvir-list/2013-September/msg00262.html

Researched-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
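As a self-contained reminder of the pattern the fix enforces, regfree() is only reached on the path where regcomp() succeeded:

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        char errbuf[256];
        int err = regcomp(&re, "broken[", REG_EXTENDED);

        if (err) {
            /*
             * regerror() may still describe the failure, but the content
             * of "re" is undefined: do NOT call regfree() on it.
             */
            regerror(err, &re, errbuf, sizeof(errbuf));
            fprintf(stderr, "regcomp failed: %s\n", errbuf);
            return 1;
        }

        /* ... use regexec(&re, ...) on the success path ... */

        regfree(&re); /* only after a successful regcomp() */
        return 0;
    }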
2018-05-21 | config: let `config_store_data_clear()` handle `key` | Martin Ågren | 1 | -7/+3
Instead of remembering to free `key` in each code path, let `config_store_data_clear()` handle that. We still need to free it before replacing it, though. Move that freeing closer to the replacing to be safe. Note that in that same part of the code, we can no longer set `key` to the original pointer, but need to `xstrdup()` it. Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | config: let `config_store_data_clear()` handle `value_regex` | Martin Ågren | 1 | -11/+6
Instead of duplicating the logic for clearing up `value_regex`, let `config_store_data_clear()` handle that. When `regcomp()` fails, the current code does not call `regfree()`. Make sure we do the same by immediately invalidating `value_regex`. Some implementations are able to handle such an extra `regfree()`-call [1], but from the example in [2], we should not do so. (The language itself in [2] is not super-clear on this.) [1] https://www.redhat.com/archives/libvir-list/2013-September/msg00262.html [2] http://pubs.opengroup.org/onlinepubs/9699919799/functions/regcomp.html Researched-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | config: free resources of `struct config_store_data` | Martin Ågren | 1 | -0/+9
Commit fee8572c6d (config: avoid using the global variable `store`, 2018-04-09) dropped the staticness of a certain struct, instead letting the users create an instance on the stack and pass around a pointer. We do not free all the memory that the struct tracks. When the struct was static, the memory would always be reachable. Now that we keep the struct on the stack, though, as soon as we return, it goes out of scope and we leak the memory it points to. In particular, we leak the memory pointed to by the `parsed` and `seen` fields. Introduce and use a helper function `config_store_data_clear()` to plug these leaks. The memory tracked here is config parser events. Once the users (`git_config_set_multivar_in_file_gently()` and `git_config_copy_or_rename_section_in_file()` at the moment) are done, no-one should be holding on to a pointer into this memory. There are two more members of the struct that are candidates for freeing in this new function (`key` and `value_regex`). Those are actually already being taken care of. The next couple of patches will move their freeing into the function we are adding here. Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
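Putting this patch and the two follow-ups above together, the helper ends up owning everything the struct tracks; roughly (a sketch based on the fields named in these messages, not the verbatim code):

    static void config_store_data_clear(struct config_store_data *store)
    {
        free(store->key);
        if (store->value_regex) {
            regfree(store->value_regex);
            free(store->value_regex);
        }
        free(store->parsed);
        free(store->seen);
        memset(store, 0, sizeof(*store));
    }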
2018-05-21 | t5300: abstract away SHA-1-specific constants | brian m. carlson | 1 | -3/+5
Adjust the test so that it computes variables for object IDs instead of using hard-coded hashes. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4208: abstract away SHA-1-specific constants | brian m. carlson | 1 | -1/+2
Adjust the test so that it computes variables for object IDs instead of using hard-coded hashes. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4045: abstract away SHA-1-specific constants | brian m. carlson | 1 | -2/+4
Adjust the test so that it computes variables for blobs instead of using hard-coded hashes. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4042: abstract away SHA-1-specific constants | brian m. carlson | 1 | -6/+10
Adjust the test so that it computes variables for blobs instead of using hard-coded hashes. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4205: sort log output in a hash-independent way | brian m. carlson | 1 | -4/+4
This test enumerates log entries and then sorts them. For SHA-1, this produces results that happen to sort in the order specified in the test, but for other hash algorithms they sort differently. Ensure we sort the log entries in a hash-independent way by sorting on the ref name instead of the object ID. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t/lib-diff-alternative: abstract away SHA-1-specific constants | brian m. carlson | 1 | -4/+8
Adjust the test code so that it computes variables for blobs instead of using hard-coded hashes. This makes t4033 and t4050 (the patience and histogram tests) pass. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4030: abstract away SHA-1-specific constants | brian m. carlson | 1 | -2/+3
Adjust the test so that it computes variables for blobs instead of using hard-coded hashes. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4029: abstract away SHA-1-specific constants | brian m. carlson | 1 | -1/+6
Adjust the test so that it computes variables for blobs instead of using hard-coded hashes. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-21 | t4029: fix test indentation | brian m. carlson | 1 | -17/+16
We typically indent our tests with a single tab, partially so that we can take advantage of indented heredocs. Make this change and move the quote marks to be in the typical position for our tests. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>