I have googled this and found many solutions, but none of them worked for me.
I am trying to clone from one machine by connecting to a remote server on the LAN network.
Running this command from another machine results in the error below.
But running the same clone command with git://192.168.8.5… on the server itself works fine and succeeds.
Any ideas?
user@USER ~
$ git clone -v git://192.168.8.5/butterfly025.git
Cloning into 'butterfly025'...
remote: Counting objects: 4846, done.
remote: Compressing objects: 100% (3256/3256), done.
fatal: read error: Invalid argument, 255.05 MiB | 1.35 MiB/s
fatal: early EOF
fatal: index-pack failed
I have added this config in .gitconfig, but it did not help either.
Using git version 1.8.5.2.msysgit.0
[core]
compression = -1
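
For reference, the same setting can also be applied from the command line instead of editing .gitconfig by hand (a minimal sketch; --global writes to the user-level ~/.gitconfig):

git config --global core.compression -1   # -1 means zlib's default compression level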
Related to this, and only useful if you have no root access and extract Git manually from an RPM (with rpm2cpio) or another package (.deb, ..) into a subfolder. Typical use case: you are trying to use a more recent version of Git than the outdated one installed on a corporate server.
If the git clone fails with fatal: index-pack failed, but without an early EOF mention, and instead shows a usage: git index-pack help message, then there is a version mismatch and you need to run git with the --exec-path parameter:
git --exec-path=path/to/subfoldered/git/usr/bin/git clone <repo>
To have that happen automatically, specify in your ~/.bashrc:
export GIT_EXEC_PATH=path/to/subfoldered/git/usr/libexec
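
For completeness, a rough sketch of the extraction step itself; the RPM file name and the target folder below are placeholders, not paths from this answer:

# extract a downloaded git RPM into a subfolder, no root access needed
mkdir -p ~/git-subfolder && cd ~/git-subfolder
rpm2cpio /path/to/git-x.y.z.rpm | cpio -idmv
# the binaries land under ./usr/bin and the helpers under ./usr/libexec,
# which is what the --exec-path / GIT_EXEC_PATH settings above point at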
Note that Git 2.13.x/2.14 (Q3 2017) does raise the default core.packedGitLimit, which influences git fetch:
The default packed-git limit value has been raised on larger platforms (from 8 GiB to 32 GiB), to save "git fetch" from a (recoverable) failure while "gc" is running in parallel.
See commit be4ca29 (20 Apr 2017) by David Turner (csusbdt).
Helped-by: Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit d97141b, 16 May 2017)
Increase core.packedGitLimit
When core.packedGitLimit is exceeded, git will close packs.
If there is a repack operation going on in parallel with a fetch, the fetch
might open a pack, and then be forced to close it due to packedGitLimit being hit.
The repack could then delete the pack out from under the fetch, causing the fetch to fail.
Increase core.packedGitLimit's default value to prevent this.
On current 64-bit x86_64 machines, 48 bits of address space are available.
It appears that 64-bit ARM machines have no standard amount of address space (that is, it varies by manufacturer), and IA64 and POWER machines have the full 64 bits.
So 48 bits is the only limit that we can reasonably care about. We reserve a few bits of the 48-bit address space for the kernel's use (this is not strictly
necessary, but it's better to be safe), and use up to the remaining 45.
No git repository will be anywhere near this large any time soon, so this should prevent the failure.
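
If you are stuck on an older Git that still ships the 8 GiB default, the limit can also be raised by hand; a hedged sketch (32g simply mirrors the newer default, adjust it to your platform):

git config --global core.packedGitLimit 32g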