Message ID | 62d815cfe104e4d172ca4d51e1d56e577f737b87.1523983687.git.yann.morin.1998@free.fr
---|---
State | Changes Requested
Series | None
Hello,

We can ask for more opinions if it needs to be split in 2 patches or not,
so I copied the CC from patch 3. Also some typos and nits.

On Tue, Apr 17, 2018 at 01:48 PM, Yann E. MORIN wrote:
> We currently attempt a shallow clone, as tentative to save bandwidth and
> download time.
>
> However, now that we keep the git tree as a cache, it may happen that we
> need to checkout an earlier commit, and that would not be present with a
> shallow clone.
>
> Furthermore, the shallow fetch is already really btroken, and just

s/btroken/broken/

> happens by chance. Consider the following actions, which are basivcally

s/basivcally/basically/

[snip]

> The checkout succeeds just because of the git-fetch in the if-condition,
> which is initially there to fetch the special refs from github MRs, or

Nit: mixed naming
    GitHub -> PR
    GitLab -> MR
    Gerrit -> Change

> gerrit reviews. That fails, but we jsut print a warning. If we were to

s/jsut/just/

> ever remove support for special refs, then the checkout would fail.
>
> The whole purpose of the git cache is to actually save bandwith and

s/bandwith/bandwidth/

> download time, but in the long run. For one-offs, people would
> preferably use a wget download (e.g. with the github macro) instead of
> a git clone.
>
> We switch to always doing a full clone. It is nore correct, and pays off

s/nore/more/

Nit: Actually using shallow fetch is not less correct, just more
complicated, because it has a lot of corner cases. But the download
script becomes more correct than before by always doing a full clone.

> in the long run...

[snip]

> +printf "Fetching all references\n"
> +_git fetch origin

The line above could even be a separate patch. It is there to fix the
fetch for all refs for git versions older than 1.9.0. Maybe we add a
comment about the git version? And/or mention the git version + upstream
patch on the commit log, as in http://patchwork.ozlabs.org/patch/897559/ ?

> +_git fetch origin -t

Regards,
Ricardo
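Ricardo's remark about git versions older than 1.9.0 (where a plain `git fetch origin` did not bring in tags) could be turned into an explicit guard. The sketch below is purely illustrative and not part of the patch: `version_lt` is a made-up helper, and GNU `sort -V` is assumed to be available.

```shell
#!/bin/sh
# Illustrative only: decide whether the extra "git fetch origin -t" is
# needed, based on the reviewer's note that plain "git fetch origin" did
# not fetch tags before git 1.9.0. Not part of the actual patch.

# True (exit 0) when $1 sorts strictly before $2 as dotted versions.
# Relies on GNU "sort -V" for version ordering.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

git_version="$(git --version 2>/dev/null | awk '{ print $3 }')"
if version_lt "${git_version}" "1.9.0"; then
    echo "git < 1.9.0: fetch tags explicitly with 'git fetch origin -t'"
else
    echo "git >= 1.9.0: a plain fetch also brings reachable tags"
fi
```

The patch instead runs both fetches unconditionally, which keeps the script simple and costs little on a repository that is already cached.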
```diff
diff --git a/support/download/git b/support/download/git
index e71ff029cd..6a4d1f937b 100755
--- a/support/download/git
+++ b/support/download/git
@@ -63,26 +63,9 @@ fi
 
 _git remote set-url origin "'${uri}'"
 
-# Try to fetch with limited depth, since it is faster than a full clone - but
-# that only works if the version is a ref (tag or branch). Before trying to do
-# a shallow clone we check if ${cset} is in the list provided by git ls-remote.
-# If not we fallback to a full fetch.
-#
-# Messages for the type of clone used are provided to ease debugging in
-# case of problems
-git_done=0
-if [ -n "$(_git ls-remote origin "'${cset}'" 2>&1)" ]; then
-    printf "Doing a shallow fetch\n"
-    if _git fetch "${@}" --depth 1 origin "'${cset}'"; then
-        git_done=1
-    else
-        printf "Shallow fetch failed, falling back to fetching all refs\n"
-    fi
-fi
-if [ ${git_done} -eq 0 ]; then
-    printf "Fetching all references\n"
-    _git fetch origin -t
-fi
+printf "Fetching all references\n"
+_git fetch origin
+_git fetch origin -t
 
 # Try to get the special refs exposed by some forges (pull-requests for
 # github, changes for gerrit...). There is no easy way to know whether
```
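For readers skimming the diff, the resulting flow can be sketched as a standalone helper. This is a hedged illustration, not Buildroot's actual script: the name `fetch_full`, its arguments, and the absence of locking and error handling are all simplifications.

```shell
#!/bin/sh
# Illustrative sketch of the flow after this patch: always do a full fetch
# (plus an explicit tag fetch, which git < 1.9.0 needs), so that any
# commit, however old, can be checked out from the cached tree.
# The helper name fetch_full and its arguments are made up for this example.
fetch_full() {
    uri="$1"; cset="$2"; dir="$3"

    mkdir -p "${dir}" && cd "${dir}" || return 1
    git init -q .
    git remote add origin "${uri}"

    # Full fetch: the whole remote history is now available locally...
    git fetch -q origin
    # ...and tags are fetched explicitly, for older git versions.
    git fetch -q origin -t

    git checkout -q "${cset}"
}
```

With the full history cached, a later download of an older cset hits the cache instead of failing the checkout.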
We currently attempt a shallow clone, as tentative to save bandwidth and
download time.

However, now that we keep the git tree as a cache, it may happen that we
need to checkout an earlier commit, and that would not be present with a
shallow clone.

Furthermore, the shallow fetch is already really btroken, and just
happens by chance. Consider the following actions, which are basivcally
what happens today:

    mkdir git
    git init git
    cd git
    git remote add origin https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    git fetch origin --depth 1 v4.17-rc1
    if ! git fetch origin v4.17-rc1:v4.17-rc1 ; then
        echo "warning"
    fi
    git checkout v4.17-rc1

The checkout succeeds just because of the git-fetch in the if-condition,
which is initially there to fetch the special refs from github MRs, or
gerrit reviews. That fails, but we jsut print a warning. If we were to
ever remove support for special refs, then the checkout would fail.

The whole purpose of the git cache is to actually save bandwith and
download time, but in the long run. For one-offs, people would
preferably use a wget download (e.g. with the github macro) instead of
a git clone.

We switch to always doing a full clone. It is nore correct, and pays off
in the long run...

Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
---
 support/download/git | 23 +++--------------------
 1 file changed, 3 insertions(+), 20 deletions(-)
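The failure mode described in the commit message can be reproduced locally, without touching kernel.org. The sketch below is illustrative (repository layout and names are made up): it shows that after a `--depth 1` fetch, a commit older than the fetched tip is simply absent from the cache.

```shell
#!/bin/sh
# Illustrative reproduction: a --depth 1 fetch brings only the tip commit,
# so an earlier commit cannot be checked out from the cache afterwards.
# All paths and names here are made up for the demonstration.
demo_shallow_gap() {
    tmp="$(mktemp -d)"

    # Build a source repo with two commits and tag the second one.
    git init -q "${tmp}/src"
    git -C "${tmp}/src" -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "first"
    old_commit="$(git -C "${tmp}/src" rev-parse HEAD)"
    git -C "${tmp}/src" -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "second"
    git -C "${tmp}/src" tag v2

    # Shallow-fetch only the tag, as the old download script did.
    git init -q "${tmp}/cache"
    git -C "${tmp}/cache" remote add origin "file://${tmp}/src"
    git -C "${tmp}/cache" fetch -q --depth 1 origin v2

    # The earlier commit object was never transferred.
    if git -C "${tmp}/cache" cat-file -e "${old_commit}" 2>/dev/null; then
        echo "earlier commit present"
    else
        echo "earlier commit missing from shallow cache"
    fi
}

demo_shallow_gap   # prints "earlier commit missing from shallow cache"
```

This is exactly the gap the patch closes: with a full fetch, the earlier commit would be present in the cache and a checkout of it would succeed.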