What would happen if I had a hash collision while using git?
For example, suppose I managed to commit two files with the same SHA-1 checksum - would git notice it, or would it corrupt one of the files?
Could git be improved to cope with that, or would I have to change to a new hash algorithm?
(Please don't deflect the question by discussing how unlikely that is - thanks.)
Current answer
It isn't really possible to answer this question with the right "but" without also explaining why it's not a problem. And that isn't possible without a good understanding of what a hash really is. It's more complicated than the simple cases you might have been exposed to in a CS program.
There is a basic misunderstanding of information theory here. If you reduce a large amount of information into a smaller amount by discarding some of it (i.e. a hash), there will be a chance of collision directly related to the length of the data. The shorter the data, the less likely a collision will be. Now, the vast majority of possible collisions would be gibberish, making them that much less likely to actually happen (you would never check in gibberish... even a binary image is somewhat structured). In the end, the chances are remote.

To answer your question: yes, git will treat them as the same. Changing the hash algorithm won't help; it would take a "second check" of some sort, but ultimately you would need as much "additional check" data as the length of the data itself to be 100% sure... keep in mind you would be 99.99999... (to a really long string of digits) sure with a simple check like you describe.

SHA-x hashes are cryptographically strong, which means it's generally hard to intentionally create two source data sets that are both very similar to each other and have the same hash. One bit of change in the data should create more than one (preferably as many as possible) bits of change in the hash output. This also means it's very difficult (but not quite impossible) to work back from the hash to the complete set of collisions, and thereby pull out the original message from that set of collisions - all but a few would be gibberish, and of the ones that aren't, there's still a huge number to sift through if the message is of any significant length. The downside of a cryptographic hash is that it is slow to compute... in general.
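The avalanche property described above - a single flipped input bit changing, on average, about half of the output bits - is easy to observe for yourself. A minimal Python sketch (the helper name is ours):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count how many bits differ between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"the quick brown fox"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip exactly one input bit

d1 = hashlib.sha1(msg).digest()
d2 = hashlib.sha1(flipped).digest()
# For a good hash you expect roughly 80 of the 160 output bits to change.
print(bit_diff(d1, d2), "of 160 bits changed")
```

This is why you cannot nudge a file slightly and keep its hash: any change scrambles the digest completely.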
So, what does it all mean for Git? Not much. Hashes get computed so rarely (relative to everything else) that their computational penalty is low overall. The chance of hitting a pair of collisions is so low that it isn't realistic for one to occur and not be detected immediately (i.e. your code would most likely suddenly stop building), allowing the user to fix the problem (back up a revision, make the change again, and you'll almost certainly get a different hash because of the time change, which also feeds the hash in git). It's more likely to be a real problem for you if you're storing arbitrary binaries in git, which isn't really its primary use model. If you want to do that... you're probably better off using a traditional database.
There's nothing wrong with thinking about this - it's a good question that many people just wave away as "so unlikely it's not worth thinking about" - but it's a little more complicated than that. If it does happen, it should be easily detectable; it won't be silent corruption in your normal workflow.
Other answers
Could git be improved to cope with that, or would I have to change to a new hash algorithm?
Collisions are possible for any hash algorithm, so changing the hash function doesn't preclude the problem, it just makes it less likely. So you should choose a really good hash function (SHA-1 already is, but you asked not to be told that :)
If two files have the same hash sum in git, it would treat those files as identical. In the absolutely unlikely case this happens, you could always go back one commit and change something in the files so they wouldn't collide anymore...
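Why git treats equal-hash files as identical follows directly from how it names objects: a blob's id is the SHA-1 of a short header ("blob <size>\0") followed by the raw content, so content with the same id is stored only once. A minimal sketch (the helper name is ours):

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the object id git assigns to a blob: SHA-1 over a
    header ("blob <size>\\0") followed by the raw file content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Two files with identical content get the same object id, so git
# stores them once; a colliding pair would be conflated the same way.
print(git_blob_sha1(b"hello\n"))  # same as: echo hello | git hash-object --stdin
```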
See Linus Torvalds' post "Starting to think about sha-256?" in the git mailing list.
Picking atoms on 10 moons
An SHA-1 hash is a 40-hex-character string... that's 4 bits per character times 40... 160 bits. Now, we know 10 bits is approximately 1000 (1024 to be exact), which means there are roughly 1000^16, i.e. 10^48, different SHA-1 hashes.
What does that equate to? The moon is made up of about 10^47 atoms. So if we had 10 moons... and you randomly picked one atom on one of those moons... and then went on to randomly pick another atom... then the likelihood that you'd pick the same atom twice is the likelihood that two given git commits would have the same SHA-1 hash.
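The arithmetic behind the moon comparison can be checked in a few lines:

```python
import math

sha1_bits = 40 * 4              # 40 hex chars, 4 bits each = 160 bits
num_hashes = 2 ** sha1_bits     # number of distinct SHA-1 values
print(math.log10(num_hashes))   # about 48.2, i.e. roughly 10^48

moon_atoms = 10 ** 47           # rough atom count of the moon
# Ten moons' worth of atoms is about the same as the SHA-1 keyspace.
print(num_hashes / (10 * moon_atoms))
```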
Expanding on this, we can ask the question...
How many commits do you need in a repository before you should start worrying about collisions?
This relates to so-called "birthday attacks", which in turn refer to the "birthday paradox" or "birthday problem": when you pick randomly from a given set, you need surprisingly few picks before you've more than likely picked something twice. But "surprisingly few" is a very relative term here.
Wikipedia has a table on birthday paradox collision probabilities. There is no entry for a 40-character hash, but interpolating the entries for 32 and 48 characters lands us at 5×10^22 git commits for a 0.1% probability of a collision. That is fifty thousand billion billion different commits, or fifty zetta-commits, before you've reached even a 0.1% chance of a collision.
The byte sum of the hashes alone for that many commits would be more data than all the data generated on Earth in a year, which is to say you would need to churn out code faster than YouTube streams video. Good luck. :D
The point of this is that unless someone is deliberately causing a collision, the probability of one happening at random is so vanishingly small that you can ignore this issue.
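Instead of interpolating Wikipedia's table, the 5×10^22 figure can be recovered from the standard birthday-bound approximation n ≈ √(2N·ln(1/(1−p))), a sketch under that approximation:

```python
import math

def picks_for_collision_probability(num_values: int, p: float) -> float:
    """Approximate number of uniform random picks from `num_values`
    values needed to reach collision probability `p` (birthday bound)."""
    return math.sqrt(2 * num_values * math.log(1 / (1 - p)))

n = picks_for_collision_probability(2 ** 160, 0.001)
print(f"{n:.2e}")  # about 5.4e+22 commits for a 0.1% collision chance
```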
"But when a collision does occur, what actually happens?"
Okay, suppose the improbable does happen, or suppose someone managed to tailor a deliberate SHA-1 hash collision. What happens then?
In that case, there is a great answer where someone experimented on it. I'll quote from that answer:
1. If a blob already exists with the same hash, you will not get any warnings at all. Everything seems to be ok, but when you push, someone clones, or you revert, you will lose the latest version (in line with what is explained above).
2. If a tree object already exists and you make a blob with the same hash: everything will seem normal, until you either try to push or someone clones your repository. Then you will see that the repo is corrupt.
3. If a commit object already exists and you make a blob with the same hash: same as #2 - corrupt.
4. If a blob already exists and you make a commit object with the same hash, it will fail when updating the "ref".
5. If a blob already exists and you make a tree object with the same hash, it will fail when creating the commit.
6. If a tree object already exists and you make a commit object with the same hash, it will fail when updating the "ref".
7. If a tree object already exists and you make a tree object with the same hash, everything will seem ok. But when you commit, all of the repository will reference the wrong tree.
8. If a commit object already exists and you make a commit object with the same hash, everything will seem ok. But when you commit, the commit will never be created, and the HEAD pointer will be moved to an old commit.
9. If a commit object already exists and you make a tree object with the same hash, it will fail when creating the commit.
As you can see, some cases are not good. Especially cases #2 and #3 would mess up your repository. However, it seems the fault stays in that repository, and the attack or bizarre improbability does not propagate to other repositories.
Also, it seems that deliberate collisions are being recognised as a real threat, and so, for instance, GitHub is taking measures to prevent them.
It's downright mind-boggling how unlikely a hash collision is! Scientists all over the world have been trying hard to achieve one, but haven't managed yet. For certain algorithms such as MD5, though, they have succeeded.
What are the odds?
SHA-256 has 2^256 possible hashes. That's about 10^78. Or, to put it more graphically, the odds of a collision are about
1 : 10^78 (a 1 followed by 78 zeros)
The odds of winning the lottery are about 1 : 14 million. The odds of hitting a SHA-256 collision are like winning the lottery on 11 consecutive days!
Mathematical explanation: 14 000 000 ^ 11 ≈ 2^256
Furthermore, the universe has about 10^80 atoms. That's only 100 times as many as there are SHA-256 combinations.
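The lottery comparison can be sanity-checked by counting bits: 11 consecutive wins at 1-in-14-million odds correspond to about 261 bits of improbability, the same order of magnitude as a 256-bit collision.

```python
import math

lottery_odds = 14_000_000          # roughly 1 : 14 million per draw
wins = 11
bits = wins * math.log2(lottery_odds)
# Winning 11 times in a row is ~2^261, i.e. even slightly *less*
# likely than guessing a specific SHA-256 value (2^256).
print(round(bits))  # 261
```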
Successful MD5 collision
Even for MD5 the odds are tiny. Nonetheless, mathematicians managed to create a collision:
d131dd02c5e6eec4 693d9a0698aff95c 2fcab58712467eab 4004583eb8fb7f89 55ad340609f4b302 83e488832571415a 085125e8f7cdc99f d91dbdf280373c5b d8823e3156348f5b ae6dacd436c919c6 dd53e2b487da03fd 02396306d248cda0 e99f33420f577ee8 ce54b67080a80d1e c69821bcb6a88393 96f9652b6ff72a70
has the same MD5 sum as
d131dd02c5e6eec4 693d9a0698aff95c 2fcab50712467eab 4004583eb8fb7f89 55ad340609f4b302 83e4888325f1415a 085125e8f7cdc99f d91dbd7280373c5b d8823e3156348f5b ae6dacd436c919c6 dd53e23487da03fd 02396306d248cda0 e99f33420f577ee8 ce54b67080280d1e c69821bcb6a88393 96f965ab6ff72a70
This does not mean that MD5 is now unsafe just because its algorithm has been broken in this way: you can create MD5 collisions on purpose, but the odds of an accidental MD5 collision are still about 1 : 2^128, which is still enormous.
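That the two 128-byte blocks above really do collide can be verified directly; the hex below is the same well-known collision pair, just re-wrapped:

```python
import hashlib

a = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080a80d1ec69821bcb6a8839396f9652b6ff72a70"
)
b = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e23487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080280d1ec69821bcb6a8839396f965ab6ff72a70"
)
assert a != b                         # the inputs differ in six bytes...
assert hashlib.md5(a).digest() == hashlib.md5(b).digest()  # ...same MD5
print(hashlib.md5(a).hexdigest())
```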
Conclusion
You really don't have to worry about collisions. Hash algorithms are the second-safest way to check whether files are identical; the only truly safe way is a binary comparison.