|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Workaround for older macOS ld.
* tb/macos-false-but-the-compiler-does-not-know-it-fix:
initialize false_but_the_compiler_does_not_know_it_
|
|
Test fix.
* jc/t6011-mv-ro-fix:
t6011: fix misconversion from perl to sed
|
|
Meson-based build framework update.
* dd/meson-perl-custom-path:
meson: allow customizing perl installation path
|
|
Make repository clean-up tasks "gc" can do available to "git
maintenance" front-end.
* ps/maintenance-missing-tasks:
builtin/maintenance: introduce "rerere-gc" task
builtin/gc: move rerere garbage collection into separate function
builtin/maintenance: introduce "worktree-prune" task
builtin/gc: move pruning of worktrees into a separate function
builtin/gc: remove global variables where it is trivial to do
builtin/gc: fix indentation of `cmd_gc()` parameters
|
|
The fallback implementation of open_nofollow() depended on
open("symlink", O_NOFOLLOW) to set errno to ELOOP, but a few BSD
derived systems use different errno, which has been worked around.
* cf/wrapper-bsd-eloop:
wrapper: NetBSD gives EFTYPE and FreeBSD gives EMLINK where POSIX uses ELOOP
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git add 'f?o'" did not add 'foo' if 'f?o', an unusual pathname,
also existed on the working tree, which has been corrected.
* kj/glob-path-with-special-char:
dir.c: literal match with wildcard in pathspec should still glob
|
|
Docfixes.
* kh/docfixes:
doc: branch: fix inline-verbatim
doc: reflog: fix `drop` subheading
|
|
Code clean-up around stale CI elements and building with Visual Studio.
* js/ci-buildsystems-cleanup:
config.mak.uname: drop the `vcxproj` target
contrib/buildsystems: drop support for building .vcproj/.vcxproj files
ci: stop linking the `prove` cache
|
|
Test result aggregation did not work in Meson based CI jobs.
* ps/ci-test-aggreg-fix-for-meson:
ci: fix aggregation of test results with Meson
|
|
Doc update.
* en/get-tree-entry-doc:
tree-walk.h: fix incorrect API comment
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Meson-based build framework update.
* ps/meson-bin-sh:
meson: prefer shell at "/bin/sh"
meson: report detected runtime executable paths
|
|
"git diff --minimal" used to give non-minimal output when its
optimization kicked in, which has been disabled.
* ng/xdiff-truly-minimal:
xdiff: disable cleanup_records heuristic with --minimal
|
|
"git index-pack --fix-thin" used to abort to prevent a cycle in
delta chains from forming in a corner case even when there is no
such cycle.
* ds/fix-thin-fix:
index-pack: allow revisiting REF_DELTA chains
t5309: create failing test for 'git index-pack'
test-tool: add pack-deltas helper
|
|
Further refinement on CI messages when an optional external
software is unavailable (e.g. due to third-party service outage).
* jc/ci-skip-unavailable-external-software:
ci: download JGit from maven, not eclipse.org
ci: update the message for unavailable third-party software
|
|
Further code clean-up in the object-store layer.
* ps/object-store-cleanup:
object-store: drop `repo_has_object_file()`
treewide: convert users of `repo_has_object_file()` to `has_object()`
object-store: allow fetching objects via `has_object()`
object-store: move function declarations to their respective subsystems
object-store: move and rename `odb_pack_keep()`
object-store: drop `loose_object_path()`
object-store: move `struct packed_git` into "packfile.h"
|
|
Update send-email to work better with Outlook's smtp server.
* ag/send-email-outlook:
send-email: add --[no-]outlook-id-fix option
send-email: retrieve Message-ID from outlook SMTP server
|
|
* 'master' of https://github.com/j6t/gitk:
gitk: add Tamil translation
gitk: limit PATH search to bare executable names
gitk: _search_exe is no longer needed
gitk: override $PATH search only on Windows
gitk: adjust indentation to match the style used in this script
|
|
* 'master' of https://github.com/j6t/git-gui:
git-gui: treat the message template file as a built file
git-gui: heed core.commentChar/commentString
git-gui: po/README: update repository location and maintainer
|
|
* js/po-update-workflow:
git-gui: treat the message template file as a built file
git-gui: po/README: update repository location and maintainer
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
|
|
* at/translation-tamil:
gitk: add Tamil translation
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Work around false positive given by CodeQL.
* js/diff-codeql-false-positive-workaround:
diff: check range before dereferencing an array element
|
|
"git mv a a/b dst" would ask to move the directory 'a' itself, as
well as its contents, in a single destination directory, which is
a contradicting request that is impossible to satisfy. This case is
now detected and the command errors out.
* ps/mv-contradiction-fix:
builtin/mv: convert assert(3p) into `BUG()`
builtin/mv: bail out when trying to move child and its parent
|
|
hashmap API clean-up to ensure hashmap_clear() leaves a cleared map
in a reusable state.
* en/hashmap-clear-fix:
hashmap: ensure hashmaps are reusable after hashmap_clear()
|
|
Some distros, notably Fedora, want to install non-core Perl libraries
into a specific directory, namely /usr/share/perl5/vendor_perl.
The Makefile build system allows this by overriding the perllibdir
variable; let's make Meson work on par with our Makefile.
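A rough sketch of the intended usage (the Meson option name here is an
assumption, meant to mirror the Makefile's perllibdir knob):
# Makefile build, as such distros already do today:
make prefix=/usr perllibdir=/usr/share/perl5/vendor_perl
# Meson build, with the new override:
meson setup build -Dperllibdir=/usr/share/perl5/vendor_perl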
Signed-off-by: Đoàn Trần Công Danh <congdanhqx@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Follow the lead of 5377abc0c9d5 ("po/git.pot: don't check in result
of "make pot"", 2022-05-26) in the Git repository and do not track
git-gui.pot anymore.
Instead, translators are expected to integrate an up-to-date version
from the master branch into their translation file using
make ALL_POFILES=po/xx.po update-po
Update README to describe the new process. It is now understood that
different translations need not be based on the same message template
file, but rather individual translators should base their translation
on the most up-to-date code. Remove the section that addresses the
i18n coordinator as it does not apply when no common base is required
among translators.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
|
|
While git-gc(1) knows to garbage collect the rerere cache,
git-maintenance(1) does not yet have a task for this cleanup. Introduce
a new "rerere-gc" task to plug this gap.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In a subsequent commit we are going to introduce a new "rerere-gc" task
for git-maintenance(1). To prepare for this, refactor the code that
spawns `git rerere gc` into a separate function.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While git-gc(1) knows to prune stale worktrees, git-maintenance(1) does
not yet have a task for this cleanup. Introduce a new "worktree-prune"
task to plug this gap.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In a subsequent commit we will introduce a new "worktree-prune" task for
git-maintenance(1). To prepare for this, refactor the code that spawns
`git worktree prune` into a separate function.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We use a couple of global variables to assemble command line arguments
for subprocesses we execute in git-gc(1). All of these variables except
the one for git-repack(1) are only used in a single place though, so
they don't really add anything but confusion.
Remove those variables.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The parameters of `cmd_gc()` aren't indented properly. Fix this.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Compiling/linking 82e79c63642c on an older macOS machine (with Xcode
14.3.1, the last version of the 14.x series) leads to this:
Undefined symbols for architecture x86_64:
"_false_but_the_compiler_does_not_know_it_", referenced from:
_start_command in libgit.a(run-command.o)
The linker fails to pick up compiler-tricks/not-constant.o that
defines the needed false_but_the_compiler_does_not_know_it_ symbol,
which is the only thing defined in that object file, from the
libgit.a archive.
Initializing the variable explicitly to 0 works around the linker
bug; the symbol type changes from 'C' to 'S' and is picked up by the
linker.
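A quick way to see the difference (a sketch, assuming a freshly built
libgit.a is at hand) is to inspect the symbol with nm; without the
explicit initialization it is listed as a common 'C' symbol, with it
the type becomes 'S' and the old linker resolves it:
nm libgit.a | grep false_but_the_compiler_does_not_know_it_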
Xcode 15 introduces a new linker, which seems to fix the bug and makes
the workaround here unnecessary, and since April 24, 2025 Apple requires
building with Xcode 16 or later in order to upload to its App Store
Connect, but not everybody is expected to upgrade their toolchain
immediately.
Helped-by: Koji Nakamaru <koji.nakamaru@gree.net>
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: தமிழ் நேரம் <anishprabu.t@gmail.com>
|
|
No, this is not about a quiz on regexp compatibility between Perl
and sed.
Back when cdbdc6bf (t: refactor tests depending on Perl substitution
operator, 2025-04-03) rewrote many uses of perl with sed, the general
pattern of the original scripts was
chmod +w some_read_only_file &&
perl -p -e "regexp to munge" some_read_only_file >some_tmp &&
mv some_tmp some_read_only_file
presumably because the author knew that replacing some_read_only_file
with "mv" at the last step would not work without "mv -f" in some
environments (GNU seems to succeed without giving any prompt when
not running interactively, which is what happens when running t/
scripts). Replacing perl with sed would be fine as long as sed with
updated regexp does the equivalent munging.
But one place used to use a different construct in the original:
perl -i.bak -p -e "regexp to munge" some_read_only_file
With _no_ temporary file or "mv", "perl -i" allows you to replace a
read-only file in place.
When we replaced the use of "perl" with "sed" in the said commit,
however, because "sed -i" is not portable, we rewrote that in-place
replacement to
sed "regexp to munge" some_read_only_file >some_tmp &&
mv some_tmp some_read_only_file
Again, unfortunately that does not work in some environment, without
"mv -f".
We could run "mv -f" here, but we would then need to remove "chmod
+w" and have them use "mv -f" instead at all places that were
touched by cdbdc6bf (t: refactor tests depending on Perl substitution
operator, 2025-04-03) to be consistent (and more concise).
For now, let's make it consistent in the other direction by mimicking
the other places that made the target read-write before moving.
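Concretely, the touched spot now follows the same shape as the rest of
that conversion (a sketch with placeholder names):
chmod +w some_read_only_file &&
sed -e "regexp to munge" some_read_only_file >some_tmp &&
mv some_tmp some_read_only_file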
Speaking of portability, the outcome of using "sed" on non-text
files is unspecified, so the entire exercise of cdbdc6bf may have
needed to be reverted if people still used ancient versions of
"standard compliant" sed that barfs on non-text files, but these
days we may be able to get away with "BSDs and GNU seem OK with it"
;-) But one fix at a time.
Reported-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As documented on NetBSD's man page, open with the O_NOFOLLOW flag and a
symlink returns -1 and sets errno to EFTYPE which differs from POSIX.
This patch fixes the following test failure:
$ sh t0602-reffiles-fsck.sh --verbose
--- expect 2025-05-02 23:05:23.920890147 +0000
+++ err 2025-05-02 23:05:23.916794959 +0000
@@ -1 +1 @@
-error: packed-refs: badRefFiletype: not a regular file but a symlink
+error: unable to open '.git/packed-refs': Inappropriate file type or format
not ok 12 - the filetype of packed-refs should be checked
FreeBSD has the same issue for EMLINK instead of EFTYPE.
This portability issue was introduced in cfea2f2da8 (packed-backend:
check whether the "packed-refs" is regular file, 2025-02-28).
Signed-off-by: Collin Funk <collin.funk1@gmail.com>
Acked-by: brian m. carlson <sandals@crustytoothpaste.net>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Add an equivalent of the "make hdr-check" target to Meson-based builds.
* kn/meson-hdr-check:
makefile/meson: add 'check-headers' as alias for 'hdr-check'
meson: add support for 'hdr-check'
meson: rename 'third_party_sources' to 'third_party_excludes'
meson: move headers definition from 'contrib/coccinelle'
coccinelle: meson: rename variables to be more specific
ci/github: install git before checking out the repository
|
|
Code clean-up for meson-based build infrastructure.
* es/meson-cleanup:
meson: only check for missing networking syms on non-Windows; add compat impls
meson: fix typo in function check that prevented checking for hstrerror
meson: add a couple missing networking dependencies
meson: do a full usage-based compile check for sysinfo
meson: check for getpagesize before using it
meson: simplify and parameterize various standard function checks
|
|
The build procedure based on Meson learned to drive the
benchmarking tests.
* ps/meson-build-perf-bench:
meson: wire up benchmarking options
meson: wire up benchmarks
t/perf: fix benchmarks with out-of-tree builds
t/perf: use configured PERL_PATH
t/perf: fix benchmarks with alternate repo formats
|
|
Update to arm64 Windows port.
* js/windows-arm64:
max_tree_depth: lower it for clangarm64 on Windows
mingw(arm64): do move the `/etc/git*` location
msvc: do handle builds on Windows/ARM64
mingw: do not use nedmalloc on Windows/ARM64
config.mak.uname: add support for clangarm64
bswap.h: add support for built-in bswap functions
|
|
Our CI needs to be aware of the location of the test output directory so
that it knows where to find test results. Some of our CI jobs achieve
this by setting the `TEST_OUTPUT_DIRECTORY` environment variable, which
ensures that the output will be written to that directory. Other jobs,
especially on GitHub Workflows, don't set that environment variable and
instead expect test results to be located in the source directory in
"t/".
The latter logic does not work with Meson though, as the test results
are not written into the source directory by default, but instead into
the build directory. As such, any job that uses Meson without setting
the environment variable will be unable to locate and aggregate results.
Fix this by explicitly setting the test output directory when we set up
the Meson build directory. Like this, we can easily default to "t/" in
the source directory when the value hasn't been set explicitly.
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Now that we dropped `contrib/buildsystems/generate` to generate Visual
Studio Solution files, it is time to also drop the `vcxproj` Makefile
target that depended on that script.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Before we had CMake support, the only way to build Git in Visual Studio
was via this hacky `generate` script.
For a while I tried to fix whenever things got broken, in particular to
allow building confidence in embargoed releases by running the CI builds
in Azure Pipelines in a private Azure DevOps project. I even carried the
patches in Git for Windows with the intention of upstreaming them,
eventually.
However, it is a lot of work with too little benefit. CMake is much
better supported by Visual Studio. So let's drop this hacky script (plus
support code).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It is not useful because we do not have any persisted directory anymore,
not since dropping our Travis CI support.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
7b399322a2e (doc: apply new format to git-branch man page, 2025-03-19)
updated the formatting for this doc to, among other things, use
backticks for some elements. In the process `è` was used by accident
instead of a backtick.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The tilde (~) count doesn’t match the length of the heading. In turn
you get a bunch of `<sub>~</sub>` instead of the intended `<h3>` in the
HTML output.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When a path with wildcard characters, e.g. 'f*o', exists in the
working tree, "git add -- 'f*o'" stops after happily finding
that there is 'f*o' and adding it to the index, without
realizing there may be other paths, e.g. 'foooo', that may match
the given pathspec.
This is because dir.c:do_match_pathspec() disables further
matches with pathspec when it finds an exact match.
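A minimal reproduction sketch (file names are illustrative):
$ git init repo && cd repo
$ touch 'f*o' foooo
$ git add -- 'f*o'      # used to add only 'f*o'; now also adds 'foooo'
$ git status --short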
Reported-by: piotrsiupa <piotrsiupa@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: K Jayatheerth <jayatheerthkulkarni2005@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When commit 50ddb089ff68 (tree-walk.c: remove the_repo from
get_tree_entry(), 2019-06-27) added an extra parameter to
get_tree_entry(), it did not fix the ordering comment about the meaning
of the parameters. Rather than just changing "third"->"fourth" and
"fourth"->"fifth", give the paramemters meaningful names (or actually,
just take the existing names from the get_tree_entry() definition in the
tree-walk.c file) and while at it, tweak the rest of the description to
incorporate the other parameter names as well.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The use of asserts is discouraged in our codebase because they lead to
different behaviour depending on how Git is built. If one is unsure
enough about whether a condition always holds that one adds an assert
for it, then that check should probably trigger regardless of how Git
is being built.
Drop the call to assert(3p) in git-mv(1) and instead use `BUG()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We have a known issue in git-mv(1) where moving both a child and any of
its parents causes an assert to trigger because the child cannot be
found anymore in the index. We have added a test for this in commit
0fcd473fdd3 (t7001: add failure test which triggers assertion,
2024-10-22) without addressing the issue, which is why the test itself
is marked as `test_expect_failure`.
The behaviour of that test relies on a call to assert(3p) though, which
may or may not be compiled into the resulting binary depending on
whether or not we pass `-DNDEBUG`. When these asserts are compiled into
Git, this may cause our CI to hang on Windows, because asserts may
cause a modal window to be shown.
While we could work around the issue by converting this into a call to
`BUG()`, let's rather address the root cause of the issue by bailing out
in case we see that both a child and any of its parents are being moved
in the same command.
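A reproduction sketch of such a contradicting request (paths are
illustrative):
$ git init repo && cd repo
$ mkdir -p a/b && touch a/b/file && git add . && git commit -m init
$ mkdir dst
$ git mv a a/b dst      # now bails out with an error instead of asserting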
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
CI fix.
* ps/ci-resurrect-p4-on-github:
ci: fix p4d executable not being found on GitHub Actions
|
|
Build update to install bash (but not zsh) completion script.
* ps/install-bash-completion:
contrib/completion: install Bash completion
|
|
A test fix.
* jk/p5332-testfix:
p5332: drop "+" from --stdin-packs input
|
|
Code clean-up.
* lo/remove-log-reencode-from-rev-info:
revision: remove log_reencode field from rev_info
|
|
Reduce requirement for Perl in our documentation build and a few
scripts.
* ps/fewer-perl:
Documentation: stop depending on Perl to generate command list
Documentation: stop depending on Perl to massage user manual
request-pull: stop depending on Perl
filter-branch: stop depending on Perl
|
|
Overhaul of the reftable API.
* ps/reftable-api-revamp:
reftable/table: move printing logic into test helper
reftable/constants: make block types part of the public interface
reftable/table: introduce iterator for table blocks
reftable/table: add `reftable_table` to the public interface
reftable/block: expose a generic iterator over reftable records
reftable/block: make block iterators reseekable
reftable/block: store block pointer in the block iterator
reftable/block: create public interface for reading blocks
git-zlib: use `struct z_stream_s` instead of typedef
reftable/block: rename `block_reader` to `reftable_block`
reftable/block: rename `block` to `block_data`
reftable/table: move reading block into block reader
reftable/block: simplify how we track restart points
reftable/blocksource: consolidate code into a single file
reftable/reader: rename data structure to "table"
reftable: fix formatting of the license header
|
|
Fix for scheduled maintenance tasks on platforms using launchctl.
* jh/gc-launchctl-schedule-fix:
maintenance: fix launchctl calendar intervals
|
|
Code clean-up.
* az/tighten-string-array-constness:
global: mark usage strings and string tables const
|
|
Typofix.
* as/typofix-in-env-h-header:
environment: fix typo: 'setup_git_directory_gently'
|
|
Since repo_config() can be called with repo set to NULL these days,
a command that is marked as RUN_SETUP in the builtin command table
does not have to check repo for NULL before making the call.
* ua/call-repo-config-with-possibly-null-repository:
builtin/difftool: remove unnecessary if statement
builtin/add: remove unnecessary if statement
|
|
Developer support fix.
* js/git-perf-env-override:
perf: do allow `GIT_PERF_*` to be overridden again
|
|
The cleanup_records function marks some lines as changed before running
the actual diff algorithm. For most lines, this is a good performance
optimization, but it also marks lines that are surrounded by many
changed lines as changed as well. This can cause redundant changes and
longer-than-necessary diffs.
Whether this results in better-looking diffs is subjective. However, the
--minimal flag explicitly requests the shortest possible diff.
The change results in shorter diffs in about 1.3% of all diffs in Git's
history. Performance wise, I have measured the impact on
"git log -p -3000 --minimal > /dev/null". With this change, I get
Time (mean ± σ): 2.363 s ± 0.023 s (25 runs)
and without this patch I measured
Time (mean ± σ): 2.362 s ± 0.035 s (25 runs).
As the difference is well within the margin of error, this does not seem
to have an impact on performance.
Signed-off-by: Niels Glodny <n.glodny@campus.lmu.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Before accessing an array element at a given index, it should be
verified that the index is within the desired bounds, not afterwards,
otherwise it may not make sense to even access the array element in the
first place. This is the point of CodeQL's
`cpp/offset-use-before-range-check` rule.
This CodeQL rule unfortunately is also triggered by the
`fill_es_indent_data()` code, even though the condition `off < len - 1`
does not even need to guarantee that the offset is in bounds (`s` points
to a NUL-terminated string, for which `s[off] == '\r'` would fail before
running out of bounds).
Working around this rare false positive to help us use an otherwise
mostly useful tool is a worthy thing to do.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the preceding commits we have converted all users of
`repo_has_object_file()` and its `_with_flags()` variant to instead use
`has_object()`. Drop these functions.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As the comment of `repo_has_object_file()` and its `_with_flags()`
variant tells us, these functions are considered to be deprecated in
favor of `has_object()`. There are a couple of slight benefits in favor
of the replacement:
- The new function has a short-and-sweet name.
- More explicit defaults: `has_object()` doesn't fetch missing objects
via promisor remotes, and neither does it reload packfiles if an
object wasn't found by default. This ensures that it becomes
immediately obvious when a simple object existence check may result
in expensive actions.
Most importantly though, it is confusing that we have two sets of
functions that ultimately do the same thing, but with different
defaults.
Start sunsetting `repo_has_object_file()` and its `_with_flags()`
sibling by replacing all callsites with `has_object()`:
- `repo_has_object_file(...)` is equivalent to
`has_object(..., HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT)`
is equivalent to `has_object(..., 0)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_SKIP_FETCH_OBJECT)`
is equivalent to `has_object(..., HAS_OBJECT_RECHECK_PACKED)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_QUICK)`
is equivalent to `has_object(..., HAS_OBJECT_FETCH_PROMISOR)`.
The replacements should be functionally equivalent.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We're about to fully remove `repo_has_object_file()` in favor of
`has_object()`. The latter function does not yet have a way to fetch
missing objects via a promisor remote though, which means that it cannot
fully replace all usecases of `repo_has_object_file()`.
Introduce a new flag `HAS_OBJECT_FETCH_PROMISOR` that causes the
function to optionally fetch missing objects which are part of a
promisor pack. This flag will be used in the subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We carry declarations for a couple of functions in "object-store.h" that
are not defined in "object-store.c", but in a different subsystem. Move
these declarations to the respective headers whose matching code files
carry the corresponding definition.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The function `odb_pack_keep()` creates a file at the passed-in path. If
this fails, then the function re-tries by first creating any potentially
missing leading directories and then trying to create the file once
again. As such, this function doesn't host any kind of logic that is
specific to the object store, but is rather a generic helper function.
Rename the function to `safe_create_file_with_leading_directories()` and
move it into "path.c". While at it, refactor it so that it loses its
dependency on `the_repository`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The function `loose_object_path()` is a trivial wrapper around
`odb_loose_path()`, with the only exception that it always uses the
primary object database of the given repository. This doesn't really add
a ton of value though, so let's drop the function and inline it at every
callsite.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The "object-store.h" header contains the definition of `struct
packed_git`. As this structure hosts all kind of information about a
specific packfile it is arguably a bit out of place in a generic place
like "object-store.h".
Move the structure as well as `pack_map_entry_cmp()` into "packfile.h".
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Add an option to allow users to specifically enable or disable
retrieving the Message-ID from the Outlook SMTP server. This can be used
for other hosts mimicking the behaviour of Outlook, or for users who set
a custom domain to be a CNAME for the Outlook SMTP server.
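For example (a sketch; the server address and patch name are
illustrative), a user whose custom domain fronts Outlook's SMTP server
can opt in explicitly:
git send-email --outlook-id-fix \
    --smtp-server=smtp.example.com 0001-some-change.patch
and --no-outlook-id-fix disables the Message-ID retrieval where it
would otherwise apply.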
While at it, let's also add a missing `*` in the description of --no-smtp-auth.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Aditya Garg <gargaditya08@live.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the series merged at bf0a430f70b5 (Merge branch 'en/strmap',
2020-11-21), strmap was built on top of hashmap and hashmap was extended
in a few ways to support strmap and be more generally useful. One of
the extensions was that hashmap_partial_clear() was introduced to allow
reuse of the hashmap without freeing the table. Peff believed that it
also made sense to introduce a hashmap_clear() which freed everything
while allowing reuse.
I added hashmap_clear(), but in doing so, overlooked the fact that for
a hashmap to be reusable, it needs a defined cmpfn and data (the
HASHMAP_INIT macro requires these fields as parameters, for example).
So, if we want the hashmap to be reusable, we shouldn't zero out those
fields. We probably also shouldn't zero out do_count_items. (We could
zero out grow_at and shrink_at, but whether we zero those or not is
irrelevant as they'll be automatically updated whenever a new entry is
inserted.)
Since clearing is associated with freeing map->table, and the only thing
required for consistency after freeing map->table is zeroing tablesize
and private_size, let's only zero those fields out.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As detailed in the previous changes to t5309-pack-delta-cycles.sh, the
logic within 'git index-pack' to analyze an incoming thin packfile with
REF_DELTAs is suspect. The algorithm is overly cautious around delta
cycles, and that leads in fact to failing even when there is no cycle.
This change adjusts the algorithm to no longer fail in these cases. In
fact, these cycle cases will no longer fail but more importantly the
valid cases will no longer fail, either. The resulting packfile from the
--fix-thin operation will not have cycles either since REF_DELTAs are
forbidden from the on-disk format and OFS_DELTAs are impossible to write
as a cycle.
The crux of the matter is how the algorithm works when the REF_DELTAs
point to base objects that exist in the local repository. When reading
the thin packfile, the object IDs for the delta objects are unknown so
we do not have the delta chain structure automatically. Instead, we need
to start somewhere by selecting a delta whose base is inside our current
object database.
Consider the case where the packfile has two REF_DELTA objects, A and B,
and the delta chain looks like "A depends on B" and "B depends on C" for
some third object C, where C is already in the current repository. The
algorithm _should_ start with all objects that depend on C, finding B,
and then moving on to all objects depending on B, finding A.
However, if the repository also already has object B, then the delta
chain can be analyzed in a different order. The deltas with base B can
be analyzed first, finding A, and then the deltas with base C are
analyzed, finding B. The algorithm currently continues to look for
objects that depend on B, finding A again. This fails due to A's
'real_type' member already being overwritten from OBJ_REF_DELTA to the
correct object type.
This scenario is possible in a typical 'git fetch' where the client does
not advertise B as a 'have' but requests A as a 'want' (and C is noticed
as a common object based on other 'have's). The reason this isn't
typically seen is that most Git servers use OFS_DELTAs to represent
deltas within a packfile. However, if a server uses only REF_DELTAs,
then this kind of issue can occur. There is nothing in the explicit
packfile format that states this use of inter-pack REF_DELTA is
incorrect, only that REF_DELTAs should not be used in the on-disk
representation to avoid cycles.
This die() was introduced in ab791dd138 (index-pack: fix race condition
with duplicate bases, 2014-08-29). Several refactors have adjusted the
error message and the surrounding logic, but this issue has existed for
a longer time as that was only a conversion from an assert().
The tests in t5309 originated in 3b910d0c5e (add tests for indexing
packs with delta cycles, 2013-08-23) and b2ef3d9ebb (test index-pack on
packs with recoverable delta cycles, 2013-08-23). These changes make
note that the current behavior of handling "resolvable" cycles is mostly
a documentation-only test, not that this behavior is the best way for
Git to handle the situation.
The fix here is somewhat complicated due to the amount of state being
adjusted by the loop within threaded_second_pass(). Instead of trying to
resume the start of the loop while adjusting the necessary context, I
chose to scan the REF_DELTAs depending on the current 'parent' and skip
any that have already been processed. This necessarily leaves us in a
state where 'child' and 'child_obj' could be left as NULL and that must
be handled later. There is also some careful handling around skipping
REF_DELTAs when there are also OFS_DELTAs depending on that parent.
There may be value in extending 'test-tool pack-deltas' to allow writing
OFS_DELTAs in order to exercise this logic across the delta types.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This new test demonstrates some behavior where a valid packfile is being
rejected by the Git client due to the order in which it is resolving
REF_DELTAs.
The thin packfile has a REF_DELTA chain A->B->C where C is not included
in the packfile. However, the client repository contains both C and B
already. Thus, 'git index-pack' is able to resolve A before resolving B.
When resolving B, it then attempts to resolve any other REF_DELTAs that
are pointing to B as a base. This "revisits" A and complains as if there
is a cycle, but it did not actually detect a cycle.
A fix will arrive in the next change.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When trying to demonstrate certain behavior in tests, it can be helpful
to create packfiles that have specific delta structures. 'git
pack-objects' uses various algorithms to select deltas based on their
compression rates, but that does not always demonstrate all possible
packfile shapes. This becomes especially important when wanting to test
'git index-pack' and its ability to parse certain pack shapes.
We have prior art in t/lib-pack.sh, where certain delta structures are
produced by manually writing certain opaque pack contents. However,
producing these script updates is cumbersome and difficult to do as a
contributor.
Instead, create a new test-tool, 'test-tool pack-deltas', that reads a
list of instructions for which objects to include in a packfile and how
those objects should be written in delta form.
At the moment, this only supports REF_DELTAs as those are the kinds of
deltas needed to exercise a bug in 'git index-pack'.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Wire up a couple of benchmarking options that we end up writing into our
"GIT-BUILD-OPTIONS" file. These options allow users to control how
exactly benchmarks are executed.
Note that neither `GIT_PERF_MAKE_COMMAND` nor `GIT_PERF_MAKE_OPTS` are
exposed as a build option. Those options are used by "t/perf/run", which
is not used by Meson.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Wire up benchmarks in Meson. The setup is mostly the same as how we wire
up our tests. The only difference is that benchmarks get wired up via
the `benchmark()` option instead of via `test()`, which gives them a bit
of special treatment:
- Benchmarks never run in parallel.
- Benchmarks aren't run by default when tests are executed.
- Meson does not inject the `MALLOC_PERTURB_` environment variable.
Using benchmarks is quite simple:
```
$ meson setup build
# Run all benchmarks.
$ meson test -C build --benchmark
# Run a specific benchmark.
$ meson test -C build --benchmark p0000-*
```
Other than that the usual command line arguments accepted when running
tests are also accepted when running benchmarks.
Note that the benchmarking target is somewhat limited because it will
only run benchmarks for the current build. Other use cases, like running
benchmarks against multiple different versions of Git, are not currently
supported. Users should continue to use "t/perf/run" for those use
cases. The script should get extended at one point in time to support
Meson, but this is outside of the scope of this series.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The "perf-lib.sh" script is sourced by all of our benchmarking suites to
make available common infrastructure. The script assumes that build and
source directory are the same, which works for our Makefile. But the
assumption breaks with both CMake and Meson, where the build directory
can be located in an arbitrary place.
Adapt the script so that it works with out-of-tree builds. Most
importantly, this requires us to figure out the location of the build
directory:
- When running benchmarks via our Makefile the build directory is the
same as the source directory. We already know to derive the test
directory ("t/") via `$(pwd)/..`, which works because we chdir into
"t/perf" before executing benchmarks. We can thus derive the build
directory by appending another "/.." to that path.
- When running benchmarks via Meson the build directory is located at
an arbitrary location. The build system thus has to make the path
known by exporting the `GIT_BUILD_DIR` environment variable.
This change prepares us for wiring up benchmarks in Meson.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Our benchmarks use a couple of Perl scripts to compute results. These
Perl scripts get executed directly, and as the shebang is hardcoded to
"/usr/bin/perl" this will fail on any system where the Perl interpreter
is located in a different path.
Our build infrastructure already lets users configure the location of
Perl, which ultimately gets written into the GIT-BUILD-OPTIONS file.
This file is being sourced by "test-lib.sh", and consequently we already
have the "PERL_PATH" variable available that contains its configured
location.
Use "PERL_PATH" to execute Perl scripts, which makes them work on more
esoteric systems like NixOS. Furthermore, adapt the shebang to use
env(1) to execute Perl so that users who have Perl in PATH, but in a
non-standard location can execute the script directly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Many of our benchmarks operate on a user-defined repository that we copy
over before running the benchmarked logic. To keep unintentional side
effects caused by on-disk state at bay we skip copying some files. This
includes for example hooks, but also the repo's configuration.
It is quite sensible to not copy over the configuration, as it is quite
easy to inadvertently carry over configuration that may significantly
impact the performance measurements. But we cannot fully ignore the
configuration either, as it may contain information about the repository
format. This will cause failures when for example using a repository
with SHA256 object format or the reftable ref format.
Fix the issue by parsing the reference and object formats from the
source repository and passing them to git-init(1).
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The script generates a Message-ID along with the other headers when
gen_header is called, and it is sent along with the email. For most email
providers, including gmail, the Message-ID goes unchanged to the
recipient.
But this does not seem to be the case with Outlook. In Outlook, when we
send our own Message-ID as a part of the headers, it discards it. Then
it generates a new random Message-ID and that is what the recipient
gets.
This is a problem because the Message-ID is crucial when we are sending
multiple emails in a thread. The current implementation for threads in
the script replies to the Message-ID it generated, but due to Outlook's
behavior, it is not the same as the one that the recipient got, thus
breaking threads. So a need arises to retrieve the Message-ID from the
server response and set it in the In-Reply-To and References email
headers instead of using the self generated one for the purpose of
replies.
The $smtp->message variable in this script for Outlook is something like
this:
2.0.0 OK <Message-ID> [Hostname=Some-hostname]
The Message-ID here is the one the recipient gets, rather than the one
the script generated.
This patch uses the fact above and retrieves the Message-ID from the
server response. It then changes the value of the $message_id variable
to the one received from the server. This value will be used when next
and subsequent messages are sent as replies to the message, thus
preserving the threading of the messages.
Signed-off-by: Aditya Garg <gargaditya08@live.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Meson detects the path of the target shell via `find_program("sh")`,
which essentially does a lookup via `PATH`. This may easily lead to a
subtly-broken Git distribution when the build host has its shell in a
location that the target host doesn't know about.
Fix the issue by appending "/bin" to the custom program path, which
causes us to prefer "/bin/sh" over a `PATH`-based lookup. While
"/bin/sh" isn't standardized, this path tends to work alright on Linux
and BSD distributions. Furthermore, "/bin/sh" is also the path we pick
in our Makefile by default, which further demonstrates that this shell
fulfills our needs.
Note that we intentionally append, not prepend, to the custom program
path. This is because the program path can be configured by the user via
the `-Dsane_tool_path=` build option, which should take precedence over
any defaults we pick for the user.
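For example, a user who prefers a different lookup order can still
override the program path themselves (the path below is illustrative):
meson setup build -Dsane_tool_path=/usr/xpg4/bin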
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Git needs to know about a couple of executable paths to pick at runtime.
This includes the system shell, but may also optionally include the Perl
and Python interpreters. Meson detects the location of these paths
automatically via `find_program()`, which does a lookup via the `PATH`
environment variable. As such, it may not be immediately obvious to the
developer which paths have been autodetected.
Improve this by exposing runtime executable paths at setup time.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
These are added in the Makefile, but not in meson. They probably won't
work well on systems without them.
CMake adds them, but only on non-Windows. Actually, it only performs
compiler checks for hstrerror, but excludes that check on Windows with
the note that it is "incompatible with the Windows build". This seems to
be misleading -- it is not incompatible, it simply doesn't exist. Still,
the compat version should not be used.
I interpret this cmake logic to mean we shouldn't even be checking for
symbol availability on Windows. In addition to making it simple to add
compat definitions, this also probably shaves off a second or two of
configure time on Windows as no compiler check needs to be performed.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Nowhere in the codebase do we otherwise check for strerror. Nowhere in
the codebase do we make use of -DNO_STRERROR. `strerror` is not a
networking function at all.
We do utilize `hstrerror` though, which is a networking function we
should have been checking here.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As evidenced in config.mak.uname and configure.ac, there are various
possible scenarios where these libraries are default-enabled in the
build, which mainly boils down to: SunOS. -lresolv is simply not the
only library that, when it exists, probably needs to be linked to for
networking.
Check for and add -lnsl -lsocket as well.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
On Solaris, sys/sysinfo.h is a completely different file and doesn't
resemble the linux file at all. There is also a sysinfo() function, but
it takes a totally different call signature, which asks for:
- the field you wish to receive
- a `char *buf` to copy the data to
and is very useful IFF you want to know, say, the hardware provider. Or,
get *specific* fields from uname(2).
https://docs.oracle.com/cd/E86824_01/html/E54765/sysinfo-2.html
It is surely possible to do this manually via `sysconf(3)` without the
nice API. I can't find anything more direct. Either way, I'm not very
attached to Solaris, so someone who cares can add it. Either way, it's
wrong to assume that sysinfo.h contains what we are looking for.
Check that sysinfo.h defines the struct we actually utilize in
builtin/gc.c, which will correctly fail on systems that don't have it.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It is deprecated and removed in SUS v3 / POSIX 2001, so various systems
may not include it. Solaris, in particular, carefully refrains from
defining it except inside of a maze of `#ifdef` to make sure you have
kept your nose clean and only used it in code that *targets* SUS v2 or
earlier.
config.mak.uname defines this automatically, though only for QNX.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This is repetitive logic. We either want to use some -lc function, or if
it is not available we define it as -DNO_XXX and usually (but not
always) provide some custom compatibility impl instead.
Checking the intent of each block when reading through the file is slow
and not very DRY. Switch to taking an array of checkable functions
instead.
Not all functions are straightforward to move, since different macro
prefixes are used.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As Matthias Sohn, JGit maintainer, recommends, update the JGit
download link from repo.eclipse.org to one on maven.org.
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
An earlier fix added an extra message immediately after failing to
download a third-party package. But near the end of the script,
their availability is checked again and given a message.
Remove the new ones added with a recent fix, as they are redundant.
If we were to add more places to download these software (e.g. for
other platforms we currently do not download them on), the existing
warning near the end of the script will also trigger.
While at it, as Dscho suggests, rewrite the WARNING: label on the
warning message to ::warning::, which presumably should be shown a
bit more prominently in the CI summary.
Suggested-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Various build tweaks, including CSPRNG selection on some platforms.
* rj/build-tweaks:
config.mak.uname: set CSPRNG_METHOD to getrandom on Linux
config.mak.uname: add arc4random to the cygwin build
config.mak.uname: add sysinfo() configuration for cygwin
builtin/gc.c: correct RAM calculation when using sysinfo
config.mak.uname: add clock_gettime() to the cygwin build
config.mak.uname: add HAVE_GETDELIM to the cygwin section
config.mak.uname: only set NO_REGEX on cygwin for v1.7
config.mak.uname: add a note about NO_STRLCPY for Linux
Makefile: remove NEEDS_LIBRT build variable
meson.build: set default help format to html on windows
meson.build: only set build variables for non-default values
Makefile: only set some BASIC_CFLAGS when RUNTIME_PREFIX is set
meson.build: remove -DCURL_DISABLE_TYPECHECK
|
|
Update the parse-options API to catch the mistake of passing the
address of an integral variable of a wrong type/size.
* ps/parse-options-integers:
parse-options: detect mismatches in integer signedness
parse-options: introduce precision handling for `OPTION_UNSIGNED`
parse-options: introduce precision handling for `OPTION_INTEGER`
parse-options: rename `OPT_MAGNITUDE()` to `OPT_UNSIGNED()`
parse-options: support unit factors in `OPT_INTEGER()`
global: use designated initializers for options
parse: fix off-by-one for minimum signed values
|
|
Document the convention to disable hooks altogether by setting the
hooksPath configuration variable to /dev/null, as shown in the example
below.
* ds/doc-disable-hooks:
docs: document core.hooksPath=/dev/null
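The documented convention boils down to (a sketch):
git config core.hooksPath /dev/null    # disable all hooks for this repository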
|
|
Code clean-up.
* ps/object-file-cleanup:
object-store: merge "object-store-ll.h" and "object-store.h"
object-store: remove global array of cached objects
object: split out functions relating to object store subsystem
object-file: drop `index_blob_stream()`
object-file: split up concerns of `HASH_*` flags
object-file: split out functions relating to object store subsystem
object-file: move `xmmap()` into "wrapper.c"
object-file: move `git_open_cloexec()` to "compat/open.c"
object-file: move `safe_create_leading_directories()` into "path.c"
object-file: move `mkdir_in_gitdir()` into "path.c"
|
|
Test updates.
* aw/t9811-modernize:
t9811: fix misconversion of tests
t9811: be more precise to check importing of tags
|
|
Make sure outage of third-party sites that supply P4, Git-LFS, and
JGit we use for testing would not prevent our CI jobs from running
at all.
* jc/ci-skip-unavailable-external-software:
ci: skip unavailable external software
|
|
Ever since we issued 2.49, external forces broke our CI jobs in
various ways, and we had to adjust our code to work them around.
Backmerge them from the 'master' front to make it easier to test
real changes to the maintenance track.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Make sure outage of third-party sites that supply P4, Git-LFS, and
JGit we use for testing would not prevent our CI jobs from running
at all.
* jc/ci-skip-unavailable-external-software:
ci: skip unavailable external software
|
|
Work around CI breakage due to fedora base image getting updated.
* js/ci-fedora-gawk:
ci(pedantic): ensure that awk is installed
|
|
Adjust to the deprecation of use of Ubuntu 20.04 GitHub Actions CI.
* js/ci-github-update-ubuntu:
ci: upgrade `sparse` to supported build agents
|
|
Squelch false-positive from sparse.
* dd/sparse-glibc-workaround:
sparse: ignore warning from new glibc headers
|
|
The ci/install-dependencies.sh script used in a very early phase of
our CI jobs downloads Perforce, Git-LFS, and JGit, used for running
the test scripts. The test framework is prepared to properly skip
the tests that depend on these external software, but the CI script
is unnecessarily strict (due to its use of "set -e" in ci/lib.sh)
and fails the entire CI run before even starting to test the rest of
the system.
Notice a failure to download to any of these external software, but
keep going. We need to be careful about cleaning after a failed
wget, as a later part of the script that does:
if type jgit >/dev/null 2>&1
then
echo "$(tput setaf 6)JGit Version$(tput sgr0)"
jgit version
else
echo >&2 "WARNING: JGit wasn't installed, see above for clues why"
fi
will (surprise!) succeed running "type jgit", and then fail with
"jgit version", taking the whole thing down due to "set -e".
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* ps/object-file-cleanup:
object-store: merge "object-store-ll.h" and "object-store.h"
object-store: remove global array of cached objects
object: split out functions relating to object store subsystem
object-file: drop `index_blob_stream()`
object-file: split up concerns of `HASH_*` flags
object-file: split out functions relating to object store subsystem
object-file: move `xmmap()` into "wrapper.c"
object-file: move `git_open_cloexec()` to "compat/open.c"
object-file: move `safe_create_leading_directories()` into "path.c"
object-file: move `mkdir_in_gitdir()` into "path.c"
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git log --{left,right}-only A...B", when A and B does not share
any common ancestor, now behaves as expected.
* mh/left-right-limited:
revision: fix --left/right-only use with unrelated histories
|
|
Work around false positive from CodeQL checker.
* js/range-check-codeql-workaround:
read-cache: check range before dereferencing an array element
|
|
Doc mark-up updates.
* ja/doc-reset-mv-rm-markup-updates:
doc: add markup for characters in Guidelines
doc: fix asciidoctor synopsis processing of triple-dots
doc: convert git-mv to new documentation format
doc: move synopsis git-mv commands in the synopsis section
doc: convert git-rm to new documentation format
doc: fix synopsis analysis logic
doc: convert git-reset to new documentation format
|
|
Optimize the code to dedup references recorded in a bundle file.
* kn/bundle-dedup-optim:
bundle: fix non-linear performance scaling with refs
t6020: test for duplicate refnames in bundle creation
|
|
"make perf" fixes.
* pb/perf-test-fixes:
p7821: fix instructions for testing with threads
p9210: fix 'scalar clone' when running from a detached HEAD
p7821: fix test_perf invocation for prereqs
|
|
When using the launchctl scheduler, the weekly job runs daily, and the
daily job runs on the first six days of each month. This appears to be
due to specifying "Day" in the calendar intervals, which according to
launchd.plist(5) is for specifying days of the month rather than days of
the week. The behaviour of running a job on the 0th day is undocumented,
but in my testing appears to be the same as not specifying "Day" in the
calendar interval, in which case the job will run daily.
Use "Weekday" in the calendar intervals, which is the correct way to
schedule jobs to run on specific days of the week.
Signed-off-by: Josh Heinrichs <joshiheinrichs@gmail.com>
Acked-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The 'hdr-check' target in Meson and the Makefile is used to check if
headers can be compiled individually. The naming however isn't very
readable, as 'hdr' is not a common short form for 'header', nor a
well-known abbreviation.
Let's introduce 'check-headers' as an alternative target for 'hdr-check'
and add a `TODO` to deprecate the latter after 2 releases. Since this
is an internal tool, we can use a shorter deprecation cycle.
Change existing usage of 'hdr-check' in 'ci/run-static-analysis.sh' to
also use 'check-headers'.
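Both spellings then work; a usage sketch (the old name sticks around
for the deprecation period):
make check-headers      # new, readable name
make hdr-check          # old name, to be removed after two releases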
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The Makefile supports a target called 'hdr-check', which checks if
individual header files can be independently compiled. Let's port this
functionality to Meson, our new build system too. The implementation
resembles that of the Makefile and provides the same check.
Since meson builds are out-of-tree, header dependencies are not
automatically met. So unlike the Makefile version, we also need to add
the required dependencies.
Also add the 'xdiff/' dir to the list of 'third_party_sources' as those
headers must be skipped from the checks too. This also skips the folder
from the 'coccinelle' checks; this is okay, since this code is an
external dependency.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The 'third_party_sources' variable was moved to the root 'meson.build'
file in the previous commit. The variable is actually used to exclude
third party sources, so rename it accordingly to 'third_party_excludes'
to avoid confusion. While here, remove a duplicate from the list.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The Meson build for coccinelle static analysis lists all headers to
analyse. Due to the way Meson exports variables between subdirs, this
variable is also available in the root Meson build.
An upcoming commit will add a new check complementing 'hdr-check' in
the Makefile. This would require the list of headers. So move the
'coccinelle_headers' to the root Meson build and rename it to 'headers';
remove the root path being appended to each header and retain that in
the coccinelle Meson build, since it is specific to the coccinelle build.
Also move the 'third_party_sources' variable to the root Meson build
since it is also a dependency for the 'headers' variable. This also
makes it easier to understand as the variable is now propagated from the
top level to the bottom.
While 'headers_to_check' is only computed when we have a repository and
the 'git' executable is present, the variable itself is exposed as an
empty array. This allows dependencies in upcoming commits to simply
check for length of the array and not worry about dependencies required
to actually populate the array.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In Meson, included subdirs export their variables to top level Meson
builds. In 'contrib/coccinelle/meson.build', we define two such
variables `sources` and `headers`. While these variables are specific to
the checks in the 'contrib/coccinelle/' directory, they also pollute the
top level 'meson.build'.
Rename them to be more specific; this ensures that they aren't
mistakenly used in the upper levels and avoids variable name collisions.
While here, change the empty list denotation to be consistent with other
places.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
GitHub's CI workflow uses 'actions/checkout@v4' to check out the
repository. This action defaults to using the GitHub REST API to obtain
the repository if the `git` executable isn't available.
The step to build Git in the GitHub workflow can be summarized as:
...
- uses: actions/checkout@v4 #1
- run: ci/install-dependencies.sh #2
...
- run: sudo --preserve-env --set-home --user=builder ci/run-build-and-tests.sh #3
...
Step #1 clones the repository. Since the `git` executable isn't present
at this step, it uses GitHub's REST API to obtain a tarball of the
repository.
Step #2 installs all dependencies, which includes the `git` executable.
Step #3 sets up the build, which includes setting up meson in the meson
job. At this point the `git` executable is present.
This means that while the `git` executable is present, the repository
doesn't contain the '.git' folder. To keep the behavior of both CIs
(GitLab and GitHub) consistent and to ensure that the build is performed
in a real-world scenario, install `git` before the repository is checked
out. This ensures that 'actions/checkout@v4' will clone the repository
instead of using a tarball. We also update the package cache while
installing `git`, because some distros will fail to locate the package
without updating the cache.
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Just as in b64d78ad02ca (max_tree_depth: lower it for MSVC to avoid
stack overflows, 2023-11-01), I encountered the same problem with the
clang builds on Windows/ARM64.
The symptom is an exit code 127 when t6700 tries to verify that `git
archive big` fails.
This exit code is reserved on Unix/Linux to mean "command not found".
Unfortunately in this case, it is the fall-back chosen by
Cygwin's `pinfo::status_exit()` method when encountering
the NTSTATUS `STATUS_STACK_OVERFLOW`, see
https://github.com/cygwin/cygwin/blob/cygwin-3.6.1/winsup/cygwin/pinfo.cc#L171
I verified manually that the stack overflow always happens somewhere
around tree depth 1403, therefore 1280 should be a safe bound in these
instances.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In fb5e3378f8 (mingw: move Git for Windows' system config where users
expect it, 2021-06-22), I moved the location of Git for Windows' system
config and system Git attributes file to the top-level `/etc/` directory
(because it is a much more obvious location than, say, `/mingw64/etc/`).
The patch relied on a very specific scenario that the newly-supported
Windows/ARM64 builds of `git.exe` fail to fall into. So let's broaden
the condition a bit, so that Windows/ARM64 builds also use that location
(instead of the even more obscure `/clangarm64/etc/` directory).
This fixes https://github.com/git-for-windows/git/issues/5431.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Git for Windows/ARM64 settled on using `clang` to compile `git.exe`, and
hence needs to run in a system where `MSYSTEM` is set to `CLANGARM64`
and the prefix to use is `/clangarm64`.
We already did that in the `MINGW` arm, i.e. for regular Git for Windows
builds using MINGW GCC (or `clang`'s shim pretending to be GCC); now it
is time to do the same in the MS Visual C part.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
[jc: adjust config.mak.uname for c18400c6]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It does not compile there, and seeing as nedmalloc has been pretty much
unmaintained since at least November 2017, as per
https://github.com/ned14/nedmalloc/issues/20#issuecomment-343432314,
there is also no hope that any fixes will materialize there.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
[jc: adjust config.mak.uname for c18400c6]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
CLANGARM64 is a relatively new MSYSTEM added by the MSYS2 team. In order
to have Git build correctly for this platform, let's add some
configuration for it to config.mak.uname.
Signed-off-by: Dennis Ameling <dennis@dennisameling.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Newer compiler versions, like GCC 10 and Clang 12, have built-in
functions for bswap32 and bswap64. This comes in handy, for example,
when targeting CLANGARM64 on Windows, which would not be supported
without this logic.
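To illustrate the idea (a minimal sketch, not the actual compat/bswap.h
logic; the fallback shown is just one possible portable implementation):
    #include <stdint.h>

    /* Prefer the compiler builtin when the compiler is known to
     * provide it; otherwise fall back to a shift-based version. */
    #if defined(__GNUC__) || defined(__clang__)
    static inline uint32_t swab32(uint32_t x)
    {
        return __builtin_bswap32(x);
    }
    #else
    static inline uint32_t swab32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
               ((x << 8) & 0x00ff0000u) | (x << 24);
    }
    #endif
The 64-bit case works the same way with __builtin_bswap64().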
Signed-off-by: Dennis Ameling <dennis@dennisameling.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Remove the log_reencode field from struct rev_info, as it is not used.
This field was introduced in 52883fb, but it hasn't been used since its
introduction.
Helped-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This perf script creates a midx by running "git multi-pack-index write"
with the "--stdin-packs" option. We feed that stdin by running "find" on
.git/objects/pack, using sed to strip off everything but the basename.
But that sed invocation also does something peculiar: it adds a "+" to
the start of each pack name. This causes the multi-pack-index command to
barf. The modified name does not match any pack it knows about, so it
ends up with an empty list of packs to put in the midx. And thus nothing
matches the --preferred-pack option we pass, which causes it to die().
The fix is to remove the extra "+" (which also lets us simplify the sed
invocation a bit, as it is now just stripping the leading directories).
But that leaves the mystery of why it was ever there in the first place.
The answer is that an earlier iteration of the patch series had a
concept of "disjoint" packs in the midx. And one of its patches here:
https://lore.kernel.org/git/c52d7e7b27a9add4f58b8334db4fe4498af1c90f.1701198172.git.me@ttaylorr.com/
taught read_packs_from_stdin() to treat a leading "+" as marking a
disjoint pack. But in the second version of the series, which was
ultimately merged, that disjoint concept went away, and the code to
parse "+" did likewise. The regular regression tests were adjusted to
match, but this case in t/perf was forgotten.
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The shell completion scripts in "contrib/completion" are being tested,
but none of our build systems support installing them. This is somewhat
confusing for Meson, where users can explicitly enable building these
scripts via `-Dcontrib=completion`. This option only controls whether
the completions are built and tested against, where "building" is a bit
of a euphemism for "copying them into the build directory".
Teach both our Makefile and Meson to install our Bash completion script.
For now, this is the only completion script that we're installing, given
that Bash completions "just work" with a canonical, well-known location
nowadays. Other completion scripts, like for example the one for zsh,
don't have a well-known location and/or require extra steps by the user
to make them available. As such, we skip installing these scripts for
now, but we may do so in the future if we ever figure out a proper way
to do this.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Our tests for git-p4(1) depend on the p4d(1) and p4(1) executables to
exist. As we require specific versions of those binaries which typically
aren't available on common distributions, we install them manually via
"ci/install-dependencies.sh".
This script will put the binaries into "$CUSTOM_PATH", which gets
defined by "ci/lib.sh" -- if not explicitly overridden, its value will
be set to "$HOME/path". This causes issues though when running our tests
as an unprivileged user, as we do both in GitLab CI and GitHub Actions,
because "$HOME" will be different when installing dependencies and when
running the tests. Consequently, the downloaded binaries will not be
found unless "$CUSTOM_PATH" is overridden to a common location.
We already do this for GitLab CI, where it points to "/custom". Let's do
the same for GitHub Actions so that Perforce-based tests are executed
again.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Since f29f1990b5 (config: teach repo_config to allow `repo` to be
NULL, 2025-03-08) already taught `repo_config()` to allow `repo` to
be NULL, there is no need to check whether `repo` is NULL before
calling `repo_config()`.
Suggested-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Usman Akinyemi <usmanakinyemi202@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Since f29f1990b5 (config: teach repo_config to allow `repo` to be
NULL, 2025-03-08) already taught `repo_config()` to allow `repo` to
be NULL, there is no need to check whether `repo` is NULL before
calling `repo_config()`.
Suggested-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Usman Akinyemi <usmanakinyemi202@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A common way to run Git's performance benchmarks on repositories other
than Git's own repository (which is not exactly large when compared to
actually large repositories) is to run them like this:
GIT_PERF_LARGE_REPO=/path/to/my/large/repo \
./p1234-*.sh -ivx
Contrary to developers' common expectations, this failed to work when
Git was built with a different `GIT_PERF_LARGE_REPO` value specified at
build time: That build-time option would have been written to the
`GIT-BUILD-OPTIONS` file, which in turn would have been sourced by
`test-lib.sh`, which in turn would have been sourced by `perf-lib.sh`,
which in turn would have been sourced by the perf test script,
_overriding_ the environment variable specified in the way illustrated
above.
Since perf tests are not run as part of the build, this most likely
unintended behavior was not caught and certainly not fixed, as the
`GIT_PERF_*` values would have been empty at build-time.
However, in 4638e8806e3a (Makefile: use common template for
GIT-BUILD-OPTIONS, 2024-12-06), a subtle change of behavior was
introduced: whereas before, a couple of build-time options (the
`GIT_PERF_*` ones included) were written to `GIT-BUILD-OPTIONS` only
when their values were non-empty, with this commit they are written
even when they are empty.
The consequence is that the above-mentioned way to run the perf tests
fails to pick up the desired `GIT_PERF_*` settings when they were
specified differently while building Git; instead, the desired settings
are only respected when specified _while building_ Git.
Let's work around the original issue, i.e. let `GIT_PERF_*` environment
variables override what is recorded in `GIT-BUILD-OPTIONS`.
Note that this is just the tip of the iceberg: there are a couple of
`GIT_TEST_*` options that may want a similar fix in `test-lib.sh`. Due
to time constraints on my side, this patch focuses exclusively on
the `GIT_PERF_*` settings.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* ob/strip-comments-on-commit:
git-gui: heed core.commentChar/commentString
|
|
The previous commit started to insist that TAG_F1_ONLY be missing,
which was not in the original. Let's not be overly eager in the
conversion.
Also, the other hunk in the commit introduced a shell syntax error,
causing the test to fail. Fix it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Above the declaration of git_work_tree_cfg, we have:
/* This is set by setup_git_dir_gently() and/or git_default_config() */
char *git_work_tree_cfg;
It can be verified that there is no function called
'setup_git_dir_gently' by running grep on the codebase:
$ grep -R setup_git_dir_gently .
./environment.c:/* This is set by setup_git_dir_gently() and/or git_default_config() */
The comment, introduced in e90fdc39b6 (Clean up work-tree handling), is
the only occurrence of the name 'setup_git_dir_gently'.
It probably meant 'setup_git_directory_gently' as that is a name of a
real function in setup.c. Correct it.
Signed-off-by: Abhijeet Sonar <abhijeet.nkt@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 05cd988dce ("wrapper: add a helper to generate numbers from a
CSPRNG", 2022-01-17) added a csprng_bytes() function which used one
of several interfaces to provide a source of cryptographically secure
pseudorandom numbers. The CSPRNG_METHOD make variable was provided to
determine the choice of available 'backends' for the source of random
bytes.
Commit 05cd988dce did not set CSPRNG_METHOD in the Linux section of
the config.mak.uname file, so it defaults to using '/dev/urandom' as
the source of random bytes. The 'backend' values which could be used
on Linux are 'arc4random', 'getrandom' or 'getentropy' ('openssl' is
an option, but seems to be discouraged).
The arc4random routines (arc4random_buf() is the one actually used) were
added to glibc in version 2.36, while both getrandom() and getentropy()
were included in 2.25. So, some of the more up-to-date distributions of
Linux (eg Debian 12, Ubuntu 24.04) would be able to use the 'arc4random'
setting. All currently supported distributions have glibc 2.25 or later
(RHEL 8 has v2.28) and, therefore, have support for the 'getrandom' and
'getentropy' settings.
The arc4random routines on the *BSDs (along with cygwin) implement the
ChaCha20 stream cipher algorithm (see RFC8439) in userspace, rather than
as a system call, and are thus somewhat faster (having avoided a context
switch to the kernel). In contrast, on Linux all three functions are
simple wrappers around the same kernel CSPRNG syscall.
If the meson build system is used on a newer platform, then it will be
configured to use 'arc4random', whereas the make build will currently
default to using '/dev/urandom' on Linux. Since there is no advantage,
in terms of performance, to the 'arc4random' setting, the 'getrandom'
setting should be preferred from an availability perspective. (Also, the
current uses of csprng_bytes() are not in any hot path).
In order to set an appropriate default, set the CSPRNG_METHOD build
variable to 'getrandom' in the Linux section of the 'config.mak.uname'
file.
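For reference, the 'getrandom' backend boils down to something like this
(a simplified sketch; the real csprng_bytes() in wrapper.c supports
several backends and does more error handling):
    #include <sys/types.h>
    #include <sys/random.h>

    static int csprng_bytes_sketch(void *buf, size_t len)
    {
        unsigned char *p = buf;

        while (len) {
            /* getrandom(2) may return fewer bytes than requested */
            ssize_t n = getrandom(p, len, 0);
            if (n < 0)
                return -1; /* caller inspects errno */
            p += n;
            len -= n;
        }
        return 0;
    }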
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Code clean-up.
* ab/environment-clean-header:
environment.h: remove unused variables
|
|
Incorrect sorting of refs with bytes with high-bit set on platforms
with signed char led to a BUG, which has been corrected.
* ps/refname-avail-check-optim:
refs/packed: fix BUG when seeking refs with UTF-8 characters
|
|
Comment fix.
* cj/refname-avail-check-optim-typofix:
refs: fix duplicated word in comment
|
|
Code simplification.
* ua/update-update-server-info:
builtin/update-server-info: remove unnecessary if statement
|
|
Remove remnants of the recursive merge strategy backend, which was
superseded by the ort merge strategy.
* en/merge-recursive-debug:
builtin/{merge,rebase,revert}: remove GIT_TEST_MERGE_ALGORITHM
tests: remove GIT_TEST_MERGE_ALGORITHM and test_expect_merge_algorithm
merge-recursive.[ch]: thoroughly debug these
merge, sequencer: switch recursive merges over to ort
sequencer: switch non-recursive merges over to ort
merge-ort: enable diff-algorithms other than histogram
builtin/merge-recursive: switch to using merge_ort_generic()
checkout: replace merge_trees() with merge_ort_nonrecursive()
|
|
"git blame --porcelain" mode now talks about unblamable lines and
lines that are blamed to an ignored commit.
* kn/blame-porcelain-unblamable:
blame: print unblamable and ignored commits in porcelain mode
|
|
"git fetch [<remote>]" with only the configured fetch refspec
should be the only thing to update refs/remotes/<remote>/HEAD,
but the code was overly eager to do so in other cases.
* jk/fetch-follow-remote-head-fix:
fetch: make set_head() call easier to read
fetch: don't ask for remote HEAD if followRemoteHEAD is "never"
fetch: only respect followRemoteHEAD with configured refspecs
|
|
It was reported that "t5620-backfill.sh" fails on s390x and sparc64 in a
test that exercises the "--min-batch-size" command line option. The
symptom was that the option didn't seem to have an effect: we didn't
fetch objects with a batch size of 20, but instead fetched all objects
at once.
As it turns out, the root cause is that `--min-batch-size` uses
`OPT_INTEGER()` to parse the command line option. While this macro
expects the caller to pass a pointer to an integer, we instead pass a
pointer to a `size_t`. This coincidentally works on most platforms, but
it breaks apart on the mentioned platforms because they are big endian.
This issue isn't specific to git-backfill(1): there are a couple of
other places where we have the same type confusion going on. This
indicates that the issue really is the interface that the parse-options
subsystem provides -- it is simply too easy to get this wrong as there
isn't any kind of compiler warning, and things just work on the most
common systems.
Address the systemic issue by introducing two new build asserts
`BARF_UNLESS_SIGNED()` and `BARF_UNLESS_UNSIGNED()`. As the names
already hint at, those macros will cause a compiler error when passed a
value that is not signed or unsigned, respectively.
Adapt `OPT_INTEGER()`, `OPT_UNSIGNED()` as well as `OPT_MAGNITUDE()` to
use those asserts. This uncovers a small set of sites where we indeed
have the same bug as in git-backfill(1). Adapt all of them to use the
correct option.
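To illustrate the failure mode (a standalone sketch, not code from this
patch):
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        size_t batch_size = 0; /* what the caller declared */
        int parsed = 20;       /* what the option parser writes */

        /* mimics storing an int through a pointer that really
         * points at a 64-bit size_t */
        memcpy(&batch_size, &parsed, sizeof(parsed));

        /* prints 20 on 64-bit little-endian, but 85899345920 (20
         * shifted into the upper half) on 64-bit big-endian systems
         * such as s390x or sparc64 */
        printf("%zu\n", batch_size);
        return 0;
    }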
Reported-by: Todd Zullinger <tmz@pobox.com>
Reported-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This commit is equivalent to the preceding commit, but instead of
introducing precision handling for `OPTION_INTEGER` we introduce it for
`OPTION_UNSIGNED`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `OPTION_INTEGER` option type accepts a signed integer. The type of
the underlying integer is a simple `int`, which restricts the range of
values accepted by such options. But there is a catch: the caller
provides a pointer to the value via the `.value` field, which is a
simple void pointer. This has two consequences:
- There is no check whether the passed value is sufficiently long to
store the entire range of `int`. This can lead to integer wraparound
in the best case and out-of-bounds writes in the worst case.
- Even when a caller knows that they want to store a value larger than
`INT_MAX` they don't have a way to do so.
In practice this doesn't tend to be a huge issue because users typically
don't end up passing huge values to most commands. But the parsing logic
is demonstrably broken, and it is too easy to get the calling convention
wrong.
Improve the situation by introducing a new `precision` field into the
structure. This field gets assigned automatically by `OPT_INTEGER_F()`
and tracks the size of the passed value. Like this it becomes possible
for the caller to pass arbitrarily-sized integers and the underlying
logic knows to handle it correctly by doing range checks. Furthermore,
convert the code to use `strtoimax()` instead of `strtol()` so that we
can also parse values larger than `LONG_MAX`.
Note that we do not yet assert signedness of the passed variable, which
is another source of bugs. This will be handled in a subsequent commit.
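A rough sketch of the idea (names simplified and illustrative: the
`precision` value is recorded as sizeof(*value) by the option macro,
and the parser range-checks before storing; intmax_t is assumed to be
64 bits here):
    #include <stddef.h>
    #include <stdint.h>

    static int store_int_sketch(void *value, size_t precision, intmax_t v)
    {
        switch (precision) {
        case sizeof(int16_t):
            if (v < INT16_MIN || v > INT16_MAX)
                return -1; /* out of range for the caller's variable */
            *(int16_t *)value = (int16_t)v;
            return 0;
        case sizeof(int32_t):
            if (v < INT32_MIN || v > INT32_MAX)
                return -1;
            *(int32_t *)value = (int32_t)v;
            return 0;
        case sizeof(int64_t):
            *(int64_t *)value = (int64_t)v;
            return 0;
        default:
            return -1; /* unsupported size */
        }
    }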
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
With the preceding commit, `OPT_INTEGER()` has learned to support unit
factors. Consequently, the major difference between `OPT_INTEGER()` and
`OPT_MAGNITUDE()` isn't the support of unit factors anymore, as both of
them do support them now. Instead, the difference is that one handles
signed and the other handles unsigned integers.
Adapt the name of `OPT_MAGNITUDE()` accordingly by renaming it to
`OPT_UNSIGNED()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There are two main differences between `OPT_INTEGER()` and
`OPT_MAGNITUDE()`:
- The former parses signed integers whereas the latter parses unsigned
integers.
- The latter parses unit factors like 'k', 'm' or 'g'.
While the first difference makes obvious sense, there isn't really a
good reason why signed integers shouldn't support unit factors, too.
This inconsistency will also become a bit of a problem with subsequent
commits, where we will fix a couple of callsites that pass an unsigned
integer to `OPT_INTEGER()`. There are three options:
- We could adapt those users to instead pass a signed integer, but
this would needlessly extend the range of accepted integer values.
- We could convert them to use `OPT_MAGNITUDE()`, as it only accepts
unsigned integers. But now we have the inconsistency that we also
start to accept unit factors.
- We could introduce `OPT_UNSIGNED()` as equivalent to `OPT_INTEGER()`
so that it knows to only accept unsigned integers without unit
suffix.
Introducing a whole new option type feels a bit excessive. There also
isn't really a good reason why `OPT_INTEGER()` cannot be extended to
also accept unit factors: no valid value passed to such options can
carry a unit factor right now, so there wouldn't be any ambiguity.
Refactor `OPT_INTEGER()` to use `git_parse_int()`, which knows to
interpret unit factors. This removes the inconsistency between the
signed and unsigned options so that we can easily fix up callsites that
pass the wrong integer type right now.
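The unit-factor handling itself boils down to something like the
following (a simplified sketch; git_parse_int() and friends additionally
guard against overflow when scaling):
    #include <stdint.h>

    static int apply_unit_factor(char unit, intmax_t *val)
    {
        switch (unit) {
        case 'k': case 'K': *val *= 1024; return 0;
        case 'm': case 'M': *val *= 1024 * 1024; return 0;
        case 'g': case 'G': *val *= 1024 * 1024 * 1024; return 0;
        default: return -1; /* unknown suffix */
        }
    }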
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While we expose macros for most of our different option types understood
by the "parse-options" subsystem, not every combination of fields that
has one as that would otherwise quickly lead to an explosion of macros.
Instead, we just initialize structures manually for those variants of
fields that don't have a macro.
Callsites that open-code these structure initializations don't use
designated initializers though and instead just provide values for each
of the fields that they want to initialize. This has three significant
downsides:
- Callsites need to specify all values up to the last field that they
care about. This often includes fields that should simply be left at
their default zero-initialized state, which adds distraction.
- Any reader not deeply familiar with the layout of the structure
has a hard time figuring out what the respective initializers mean.
- Reordering or introducing new fields in the middle of the structure
is impossible without adapting all callsites.
Convert all sites to instead use designated initializers, which we have
started using in our codebase quite a while ago. This allows us to skip
any default-initialized fields, gives the reader context by specifying
the field names and allows us to reorder or introduce new fields where
we want to.
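An illustrative example (not an actual call site from this patch;
`number` is a hypothetical variable, and the usual parse-options and
gettext declarations are assumed to be in scope):
    struct option opt = {
        .type = OPTION_INTEGER,
        .short_name = 'n',
        .long_name = "number",
        .value = &number,
        .help = N_("a number"),
    };
Any field that is not mentioned stays zero-initialized, and a reader can
tell at a glance what each value means.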
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We accept a maximum value in `git_parse_signed()` that restricts the
range of accepted integers. As the intent is to pass `INT*_MAX` values
here, this maximum doesn't only act as the upper bound, but also as the
implicit lower bound of the accepted range.
This lower bound is calculated by negating the maximum. But given that
the maximum value of a signed integer with N bits is `2^(N-1)-1` whereas
the minimum value is `-2^(N-1)` we have an off-by-one error in the lower
bound.
Fix this off-by-one error by using `-max - 1` as lower bound instead.
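Concretely, for a 32-bit `int`: max = 2147483647 (2^31 - 1) while
INT_MIN = -2147483648, i.e. -max - 1, so a sketch of the corrected range
check looks like:
    /* sketch only; the real git_parse_signed() is structured differently */
    static int in_range(long long val, long long max)
    {
        return val >= -max - 1 && val <= max;
    }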
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The arc4random_buf() function has been available in cygwin since
about 2016 (somewhere in the v2.x branch). Set the CSPRNG_METHOD
build variable to 'arc4random', in the cygwin section, to enable
the use of this cryptographically-secure pseudorandom number
function. Note that the autoconf and new meson builds also enable
this function.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Although sysinfo() is a 'Linux only' function, cygwin provides an
implementation which appears to be functional. The assumption that
this function is Linux only is reflected in the way the HAVE_SYSINFO
build variable is handled by the Makefile and config.mak.uname.
Rework the setting of HAVE_SYSINFO in the Linux section of the system
specific config file, along with the corresponding setting of the
BASIC_CFLAGS in the Makefile. Add the setting of HAVE_SYSINFO to the
cygwin section of 'config.mak.uname'. While here, add a test for the
sysinfo() function to the autoconf build system.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The man page for sysinfo(2) on Linux states that (from v2.3.48) the
sizes of the memory and swap fields, of the returned structure, are
given as multiples of 'mem_unit' bytes. In earlier versions (prior to
v2.3.23 on i386 in particular), the 'mem_unit' field was not part of
the structure, and all sizes were measured in bytes. The man page does
not discuss the motivation for this change, but it is possible that the
change was intended for the, relatively rare, 32-bit platform with more
than 4GB of memory.
The total_ram() function makes the assumption that the 'totalram' field
of the 'struct sysinfo' is measured in bytes, or alternatively that the
'mem_unit' field is always equal to one. Having written a program to call
the sysinfo() function and print the structure fields, it seems that, on
Linux x86_64 and i686 anyway, the 'mem_unit' field is indeed set to one
(note that the 32-bit system had only 2GB ram). However, cygwin also has
a sysinfo() implementation, which gives the following values:
$ ./sysinfo
uptime: 21381
loads: 0, 0, 0
total ram: 2074637
free ram: 843237
shared ram: 0
buffer ram: 0
total swap: 327680
free swap: 306932
procs: 15
total high: 0
free high: 0
mem_unit: 4096
total ram: 8497713152
$
[This laptop has 8GB ram, so a little bit seems to be missing. ;) ]
Modify the total_ram() function to allow for the possibility that the
memory size is not specified in bytes (i.e. 'mem_unit' is greater than
one).
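A sketch of the adjusted calculation (simplified; the real total_ram()
also has non-sysinfo fallbacks for other platforms):
    #include <stdint.h>
    #include <sys/sysinfo.h>

    static uint64_t total_ram_sketch(void)
    {
        struct sysinfo si;

        if (sysinfo(&si))
            return 0;
        /* 'totalram' is a count of 'mem_unit'-sized blocks; mem_unit
         * is typically 1 on Linux but e.g. 4096 on Cygwin */
        return (uint64_t)si.totalram * si.mem_unit;
    }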
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Cygwin supports the clock_gettime() function, along with the associated
CLOCK_MONOTONIC preprocessor symbol. The autoconf and meson builds both
enable the use of those symbols. In order to have the same configuration
for the make builds, add the HAVE_CLOCK_GETTIME and HAVE_CLOCK_MONOTONIC
build variables to the cygwin section of the config.mak.uname file.
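For context, a minimal sketch of what these two build variables allow
the code to use (not the actual getnanotime() implementation):
    #include <time.h>

    static int monotonic_ns(long long *out)
    {
        struct timespec ts;

        /* CLOCK_MONOTONIC is immune to wall-clock adjustments */
        if (clock_gettime(CLOCK_MONOTONIC, &ts))
            return -1;
        *out = (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        return 0;
    }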
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Cygwin has provided the getdelim() function as far back as (at least)
2011. The autoconf and meson builds enable the use of this symbol.
In order to have the same configuration for autoconf, meson and make,
enable the HAVE_GETDELIM build variable in the cygwin section of the
config.mak.uname file.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 92f63d2b05 ("Cygwin 1.7 needs compat/regex", 2013-07-19) set
the NO_REGEX build variable because the platform regex library failed
some of the tests (t4018 and t4034), which passed just fine with the
compat library.
After some time (maybe a year or two), the platform library had been
updated (with an import from FreeBSD, I believe) and now passed the full
test-suite. This would be about the time of the v1.7 -> v2.0 transition
in 2015. I had a patch ready to send, but just didn't get around to
submitting it to the list. At some point in the interim, the official
cygwin git package used the autoconf build system, which sets the
NO_REGEX variable to use the platform regex library functions. The new
meson build system does likewise.
The cygwin platform regex library, in addition to now passing the tests
which formerly failed, now passes a 'test_expect_failure' test in the
t7815-grep-binary test file. In particular, test #12 ('git grep .fi a')
determines that the regex pattern '.' matches a NUL character.
The commit f96e56733a ("grep: use REG_STARTEND for all matching if
available", 2010-05-22) added the test in question, but it does not
give any indication as to why the test was framed as an expected fail,
rather than a 'positive' test that the 'git grep' command fails to
match a NUL. Note that the previous test #11 was also originally
marked in that commit as a 'test_expect_failure', but was flipped to
a 'success' test in commit 7e36de5859 ("t/t7008-grep-binary.sh: un-TODO
a test that needs REG_STARTEND", 2010-08-17).
In order to produce the same NO_REGEX configuration from autoconf, meson
and make, modify config.mak.uname to only set NO_REGEX for cygwin v1.7.
In addition, skip test t7815.12 on cygwin, by adding the !CYGWIN pre-
requisite to the test header, which (among other things) removes an
'...; please update test(s)' comment.
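For context, REG_STARTEND is the extension that lets a caller match
inside a buffer that may contain NUL bytes; a minimal usage sketch (not
the actual grep code):
    #include <regex.h>
    #include <stddef.h>

    /* match 're' against the first 'len' bytes of 'buf' without
     * requiring the haystack to be NUL-terminated */
    static int match_in_buffer(regex_t *re, const char *buf, size_t len)
    {
        regmatch_t pmatch;

        pmatch.rm_so = 0;
        pmatch.rm_eo = (regoff_t)len;
        return regexec(re, buf, 1, &pmatch, REG_STARTEND) == 0;
    }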
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 817151e61a ("Rename safe_strncpy() to strlcpy().", 2006-06-24)
added the NO_STRLCPY make variable to allow the conditional use of
the gitstrlcpy() compat function on those platforms which didn't
provide the 'standard' strlcpy() function.
Recently, in the summer of 2023, the strlcpy() and strlcat() functions
were added to the glibc library (v2.38), so some of the more up-to-date
Linux distributions no longer need to set NO_STRLCPY. For example, both
Ubuntu 24.04 LTS and RHEL 10 beta have glibc v2.39. However, several
distributions, which are still within their support window, have an
earlier version and must still use the 'compat' version of strlcpy().
If the meson or autoconf build systems are used on newer platforms, then
they will be configured to use strlcpy() from glibc, whereas the make
build will always choose the 'compat' function instead. Add a note to
the config.mak.uname file, in the Linux section, to prompt make users to
override NO_STRLCPY in the config.mak file, if appropriate.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit d19e3a5b21 ("Makefile: add NEEDS_LIBRT to optionally link with
librt", 2016-07-07) introduced the NEEDS_LIBRT build variable to
disassociate the HAVE_CLOCK_GETTIME variable from the unconditional
linking of the librt library. At one time, the clock_gettime() function
was not available as part of the libc library and (on some unix systems)
required linking with librt.
Commit 52fcec75ce ("config.mak.uname: define NEEDS_LIBRT under Linux, for
now", 2016-07-10) set the NEEDS_LIBRT variable in the Linux section of
the config.mak.uname file, since Debian 7 (wheezy) was one of the few
remaining distributions, with glibc 2.13, that required linking with
librt for clock_gettime(). Note that from glibc version 2.17, this is no
longer necessary.
Note that Debian 7.0 was released on May 4th, 2013 and benefited from
long term support until May 2018 when it went end-of-life. Since that
time, Linux distributions use a more up-to-date library, for example:
Distribution    glibc version    end of support
Debian 8        2.19             30th June 2020
RHEL 8          2.28             31st May 2024 *
Ubuntu 16.04    2.23             30th Apr 2021
* paid 'Maintenance support' ends 31st May 2029
Since it is no longer required, remove NEEDS_LIBRT from the Makefile and
config.mak.uname.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The build variable DEFAULT_HELP_FORMAT has an appropriate default
('man') set in the code, so there is no need to pass the -Define on
the compiler command-line, unless the build requires a non-standard
value.
In addition, on Windows the make build overrides the default help
format to 'html', rather than 'man', in the 'config.mak.uname' file.
In order to suppress the -Define on the C compiler command-line, only
add the -Define to the 'libgit_c_args' variable when the requested
value is not the standard 'man'. In order to override the default value
on Windows, add a 'platform' value to the 'default_help_format' combo
option and set it as the default choice. When this option is set to
'platform', use the 'host_machine.system()' method call to determine the
appropriate default value for the host system.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some preprocessor -Defines have defaults set in the source code when
they have not been provided to the C compiler. In this case, there is
no need to pass them on the command-line, unless the build requires a
non-standard value.
The build variables for DEFAULT_EDITOR and DEFAULT_PAGER have appropriate
defaults ('vi' and 'less') set in the code. Add the preprocessor -Defines
to the 'libgit_c_args' only if the values set with the corresponding
'options' are different to these standard values.
Also, the 'git-var' documentation contains some conditional text which
documents the chosen compiled-in value, which would not read well for
the standard values. Similar to the above, only add the corresponding
'-a' attribute arguments to the 'asciidoc_common_options' variable, if
the values set in the 'options' are different to these standard values.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Several build variables only have any meaning when the RUNTIME_PREFIX
variable has been set. In particular, the following build variables are
otherwise ignored:
HAVE_BSD_KERN_PROC_SYSCTL
PROCFS_EXECUTABLE_PATH
HAVE_NS_GET_EXECUTABLE_PATH
HAVE_ZOS_GET_EXECUTABLE_PATH
HAVE_WPGMPTR
Make setting BASIC_CFLAGS, for each of these variables, conditional on
the RUNTIME_PREFIX being defined.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 9371322a60 ("sparse: suppress some \"using sizeof on a function\"
warnings", 2013-10-06) used target-specific variable assignments to add
-DCURL_DISABLE_TYPECHECK to SPARSE_FLAGS for each of the files affected
by the "typecheck-gcc.h" warnings. (http-push.c, http.c, http-walker.c
and remote-curl.c).
These warnings are only issued by sparse, and not by gcc, so we do not
want to disable the 'type checking' for non-sparse targets. The meson
build does not provide any sparse targets, so there is no need to use
the CURL_DISABLE_TYPECHECK preprocessor flag with the C compiler.
In order to re-enable the curl 'type checking' in the meson build, remove
the assignment of -DCURL_DISABLE_TYPECHECK to libgit_c_args.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git cat-file --batch" and friends learned to allow "--filter=" to
omit certain objects, just like the transport layer does.
* ps/cat-file-filter-batch:
builtin/cat-file: use bitmaps to efficiently filter by object type
builtin/cat-file: deduplicate logic to iterate over all objects
pack-bitmap: introduce function to check whether a pack is bitmapped
pack-bitmap: add function to iterate over filtered bitmapped objects
pack-bitmap: allow passing payloads to `show_reachable_fn()`
builtin/cat-file: support "object:type=" objects filter
builtin/cat-file: support "blob:limit=" objects filter
builtin/cat-file: support "blob:none" objects filter
builtin/cat-file: wire up an option to filter objects
builtin/cat-file: introduce function to report object status
builtin/cat-file: rename variable that tracks usage
|
|
"make test" used to have a hard dependency on (basic) Perl; tests
have been rewritten to help an environment with NO_PERL test the build as
much as possible.
* ps/test-wo-perl-prereq:
t5703: refactor test to not depend on Perl
t5316: refactor `max_chain()` to not depend on Perl
t0210: refactor trace2 scrubbing to not use Perl
t0021: refactor `generate_random_characters()` to not depend on Perl
t/lib-httpd: refactor "one-time-perl" CGI script to not depend on Perl
t/lib-t6000: refactor `name_from_description()` to not depend on Perl
t/lib-gpg: refactor `sanitize_pgp()` to not depend on Perl
t: refactor tests depending on Perl for textconv scripts
t: refactor tests depending on Perl to print data
t: refactor tests depending on Perl substitution operator
t: refactor tests depending on Perl transliteration operator
Makefile: stop requiring Perl when running tests
meson: stop requiring Perl when tests are enabled
t: adapt existing PERL prerequisites
t: introduce PERL_TEST_HELPERS prerequisite
t: adapt `test_readlink()` to not use Perl
t: adapt `test_copy_bytes()` to not use Perl
t: adapt character translation helpers to not use Perl
t: refactor environment sanitization to not use Perl
t: skip chain lint when PERL_PATH is unset
|
|
"git help --build-options" reports SHA-1 and SHA-256 backends used
in the build.
* jt/help-sha-backend-info-in-build-options:
help: include unsafe SHA-1 build info in version
help: include SHA implementation in version info
|
|
Updating multiple references has only been possible in an all-or-none
fashion with transactions, but it can be more efficient to batch
multiple updates even when some of them are allowed to fail in a
best-effort manner. A new "best effort batches of updates" mode
has been introduced.
* kn/non-transactional-batch-updates:
update-ref: add --batch-updates flag for stdin mode
refs: support rejection in batch updates during F/D checks
refs: implement batch reference update support
refs: introduce enum-based transaction error types
refs/reftable: extract code from the transaction preparation
refs/files: remove duplicate duplicates check
refs: move duplicate refname update check to generic layer
refs/files: remove redundant check in split_symref_update()
|
|
Auth-related (and unrelated) error handling in send-email has been
made more robust.
* zy/send-email-error-handling:
send-email: finer-grained SMTP error handling
send-email: capture errors in an eval {} block
|
|
"git maintenance" learns a new task to expire reflog entries.
* ps/maintenance-reflog-expire:
builtin/maintenance: introduce "reflog-expire" task
builtin/gc: split out function to expire reflog entries
builtin/reflog: make functions regarding `reflog_expire_options` public
builtin/reflog: stop storing per-reflog expiry dates globally
builtin/reflog: stop storing default reflog expiry dates globally
reflog: rename `cmd_reflog_expire_cb` to `reflog_expire_options`
|
|
"git rev-list" learns machine-parsable output format that delimits
each field with NUL.
* jt/rev-list-z:
rev-list: support NUL-delimited --missing option
rev-list: support NUL-delimited --boundary option
rev-list: support delimiting objects with NUL bytes
rev-list: refactor early option parsing
rev-list: inline `show_object_with_name()` in `show_object()`
|
|
Some warnings from "-Wsign-compare" for pathspec.c have been
squelched.
* ab/pathspec-sign-compare-workaround:
pathspec: fix sign comparison warnings
|
|
Random build fixes.
* ps/misc-build-fixes:
ci: use Visual Studio for win+meson job on GitHub Workflows
meson: distinguish build and target host binaries
meson: respect 'tests' build option in contrib
gitweb: fix generation of "gitweb.js"
meson: fix handling of '-Dcurl=auto'
|
|
A few traditional unit tests have been rewritten to use the clar
framework.
* sk/clar-trailer-urlmatch-norm-test:
t/unit-tests: convert urlmatch-normalization test to clar
t/unit-tests: convert trailer test to use clar
|
|
Some warnings from "-Wsign-compare" for builtin/rm.c have been
squelched.
* ab/rm-sign-compare:
rm: fix sign comparison warnings
|
|
A ref transaction corner case fix.
* jt/ref-transaction-abort-fix:
builtin/fetch: avoid aborting closed reference transaction
|
|
Work around CI breakage due to fedora base image getting updated.
* js/ci-fedora-gawk:
ci(pedantic): ensure that awk is installed
|
|
Adjust to the deprecation of use of Ubuntu 20.04 GitHub Actions CI.
* js/ci-github-update-ubuntu:
ci: upgrade `sparse` to supported build agents
|
|
Squelch false-positive from sparse.
* dd/sparse-glibc-workaround:
sparse: ignore warning from new glibc headers
|
|
The tests use grep to search the output of `git tag` for tag names they
expect to exist, which can incorrectly pass if an unexpected tag
has the expected tag as its substring. We fix this by using `git
show-ref --verify` instead.
Additionally, we add a negative test to verify that a possible
unintended tag does not show up in the imported repository.
This change also fixes an additional problem, where piping the
output of `git tag` caused the exit codes to be lost.
Signed-off-by: Anthony Wang <anthonywang513@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If a user wishes to disable hooks, then they can do so using the
established pattern of setting 'core.hooksPath' to /dev/null. This is
already tested in t1350-config-hooks-path.sh, but has not previously
been visible in the documentation.
Update the documentation to include this as an option.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The "cmd-list.perl" script is used to extract the list of commands part
of a specific category and extracts the description of each command from
its respective manpage. The generated output is then included in git(1)
to list all Git commands.
The script is written in Perl. Refactor it to use shell scripting
exclusively so that we can get rid of the mandatory dependency on Perl
to build our documentation.
The converted script is slower compared to its Perl implementation. But
by being careful and not spawning external commands in `format_one ()`
we can mitigate the performance hit to a reasonable level:
Benchmark 1: Perl
Time (mean ± σ): 10.3 ms ± 0.2 ms [User: 7.0 ms, System: 3.3 ms]
Range (min … max): 10.0 ms … 11.1 ms 200 runs
Benchmark 2: Shell
Time (mean ± σ): 74.4 ms ± 0.4 ms [User: 48.6 ms, System: 24.7 ms]
Range (min … max): 73.1 ms … 75.5 ms 200 runs
Summary
Perl ran
7.23 ± 0.13 times faster than Shell
While a sevenfold slowdown is significant, the benefit of not requiring
Perl for a fully-functioning Git installation outweighs waiting a couple
of milliseconds longer during the build process.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The "fix-texi.perl" script is used to fix up the output of
`docbook2x-texi`:
- It changes the filename to be "git.info".
- It changes the directory category and entry.
The script is written in Perl, but it can be rather trivially converted
to a shell script. Do so to remove the dependency on Perl for building
the user manual.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While git-request-pull(1) is written as a shell script, for it to
function we depend on Perl being available. The script gets installed
unconditionally though, regardless of whether or not Perl is even
available on the system. When it's not available, the `@PERL_PATH@`
variable may be substituted with a nonexistent executable path and thus
cause the script to fail.
Refactor the script so that it does not depend on Perl at all anymore.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While git-filter-branch(1) is written as a shell script, the
`--state-branch` feature depends on Perl to save and extract the object
ID mappings. This can lead to subtle breakage though:
- We execute `perl` directly without respecting the `PERL_PATH`
configured by the distribution. As such, it may happen that we use
the wrong version of Perl.
- We install the script unchanged even if Perl isn't available at all
on the system, so using `--state-branch` would lead to failure
altogether in that case.
Fix this by dropping Perl and instead implementing the feature with
shell scripting exclusively.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The image pointed to by the fedora:latest tag has moved from fedora
41 to 42. The fedora 41 container images have awk installed while
the fedora 42 images do not. That change is most likely just part
of reducing the size of the base container images.
In both AlmaLinux and Fedora (as well as other RHEL
derivatives/relatives), awk is provided by the gawk package.
On Fedora, `dnf install awk` would work, by using the package
filelist data to determine that /usr/bin/awk is provided by gawk and
installing gawk as a result.
On AlmaLinux (8 & 9, by quick testing by Todd), that is not the case
and you'd need to use `dnf install gawk` or `dnf install '*bin/awk'`
to get it installed. Having said that, awk _is_ included in the
current AlmaLinux 8 and 9 images, so it isn't strictly needed. But
it's probably better to be explicit that we need it installed, as a
defense against some future change to the AlmaLinux container
removing awk.
Because we know that on both of these distros, our scripts that call
for 'awk' had been using 'gawk' that was installed as part of the
base image, let's make sure that we explicitly install 'gawk'. If
the image already has it, it would be a no-op that does not cause
breakage.
Suggested-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test fix for an already graduated topic.
* bc/allow-upload-pack-from-other-people:
t5605: fix test for cloning from a different user
|
|
"git-merge-file" documentation source, which has lines that look
like conflict markers, lacked custom conflict marker size defined,
which has been corrected.
* pw/custom-conflict-marker-size-for-merge-related-docs:
merge-file doc: set conflict-marker-size attribute
|
|
Code clean-up.
* js/comma-semicolon-confusion:
detect-compiler: detect clang even if it found CUDA
clang: warn when the comma operator is used
compat/regex: explicitly mark intentional use of the comma operator
wildmatch: avoid using of the comma operator
diff-delta: avoid using the comma operator
xdiff: avoid using the comma operator unnecessarily
clar: avoid using the comma operator unnecessarily
kwset: avoid using the comma operator unnecessarily
rebase: avoid using the comma operator unnecessarily
remote-curl: avoid using the comma operator unnecessarily
|
|
"git clone" still gave the message about the default branch name;
this message has been turned into an advice message that can be
turned off.
* jt/clone-guess-remote-head-fix:
advice: allow disabling default branch name advice
builtin/clone: suppress unexpected default branch advice
remote: allow `guess_remote_head()` to suppress advice
|
|
The job to coalesce loose objects into packfiles in "git
maintenance" now has configurable batch size.
* ds/maintenance-loose-objects-batchsize:
maintenance: add loose-objects.batchSize config
maintenance: force progress/no-quiet to children
|
|
* lo/userdiff-gitconfig:
userdiff: add builtin driver for INI files
|
|
Fix lockfile contention in reftable code on Windows.
* ps/mingw-creat-excl-fix:
compat/mingw: fix EACCESS when opening files with `O_CREAT | O_EXCL`
meson: fix compat sources when compiling with MSVC
|