How can I do this fast?

Of course I could do it like this:

static bool ByteArrayCompare(byte[] a1, byte[] a2)
{
    if (a1.Length != a2.Length)
        return false;

    for (int i=0; i<a1.Length; i++)
        if (a1[i]!=a2[i])
            return false;

    return true;
}

But I'm looking for either a BCL function or some highly optimized, proven way to do this.

java.util.Arrays.equals((sbyte[])(Array)a1, (sbyte[])(Array)a2);

works quite well, but it doesn't look like that would work for x64.

Note my fast answer here.


Current answer

Inspired by the EqualBytesLongUnrolled method posted by ArekBulski, I worked out a solution with one additional optimization. In my case, differences between the arrays tend to be near their tails. In testing, I found that when this is true for large arrays, being able to compare the elements in reverse order gives this solution a huge performance gain over the memcmp-based solution. Here is that solution:

public enum CompareDirection { Forward, Backward }

private static unsafe bool UnsafeEquals(byte[] a, byte[] b, CompareDirection direction = CompareDirection.Forward)
{
    // returns when a and b are same array or both null
    if (a == b) return true;

    // if either is null or different lengths, can't be equal
    if (a == null || b == null || a.Length != b.Length)
        return false;

    const int UNROLLED = 16;                // count of longs 'unrolled' in optimization
    int size = sizeof(long) * UNROLLED;     // 128 bytes (min size for 'unrolled' optimization)
    int len = a.Length;
    int n = len / size;         // count of full 128 byte segments
    int r = len % size;         // count of remaining 'unoptimized' bytes

    // pin the arrays and access them via pointers
    fixed (byte* pb_a = a, pb_b = b)
    {
        if (r > 0 && direction == CompareDirection.Backward)
        {
            byte* pa = pb_a + len - 1;
            byte* pb = pb_b + len - 1;
            byte* phead = pb_a + len - r;
            while(pa >= phead)
            {
                if (*pa != *pb) return false;
                pa--;
                pb--;
            }
        }

        if (n > 0)
        {
            int nOffset = n * size;
            if (direction == CompareDirection.Forward)
            {
                long* pa = (long*)pb_a;
                long* pb = (long*)pb_b;
                long* ptail = (long*)(pb_a + nOffset);
                while (pa < ptail)
                {
                    if (*(pa + 0) != *(pb + 0) || *(pa + 1) != *(pb + 1) ||
                        *(pa + 2) != *(pb + 2) || *(pa + 3) != *(pb + 3) ||
                        *(pa + 4) != *(pb + 4) || *(pa + 5) != *(pb + 5) ||
                        *(pa + 6) != *(pb + 6) || *(pa + 7) != *(pb + 7) ||
                        *(pa + 8) != *(pb + 8) || *(pa + 9) != *(pb + 9) ||
                        *(pa + 10) != *(pb + 10) || *(pa + 11) != *(pb + 11) ||
                        *(pa + 12) != *(pb + 12) || *(pa + 13) != *(pb + 13) ||
                        *(pa + 14) != *(pb + 14) || *(pa + 15) != *(pb + 15)
                    )
                    {
                        return false;
                    }
                    pa += UNROLLED;
                    pb += UNROLLED;
                }
            }
            else
            {
                long* pa = (long*)(pb_a + nOffset);
                long* pb = (long*)(pb_b + nOffset);
                long* phead = (long*)pb_a;
                while (phead < pa)
                {
                    if (*(pa - 1) != *(pb - 1) || *(pa - 2) != *(pb - 2) ||
                        *(pa - 3) != *(pb - 3) || *(pa - 4) != *(pb - 4) ||
                        *(pa - 5) != *(pb - 5) || *(pa - 6) != *(pb - 6) ||
                        *(pa - 7) != *(pb - 7) || *(pa - 8) != *(pb - 8) ||
                        *(pa - 9) != *(pb - 9) || *(pa - 10) != *(pb - 10) ||
                        *(pa - 11) != *(pb - 11) || *(pa - 12) != *(pb - 12) ||
                        *(pa - 13) != *(pb - 13) || *(pa - 14) != *(pb - 14) ||
                        *(pa - 15) != *(pb - 15) || *(pa - 16) != *(pb - 16)
                    )
                    {
                        return false;
                    }
                    pa -= UNROLLED;
                    pb -= UNROLLED;
                }
            }
        }

        if (r > 0 && direction == CompareDirection.Forward)
        {
            byte* pa = pb_a + len - r;
            byte* pb = pb_b + len - r;
            byte* ptail = pb_a + len;
            while(pa < ptail)
            {
                if (*pa != *pb) return false;
                pa++;
                pb++;
            }
        }
    }

    return true;
}
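
For example, a quick way to exercise the backward path might look like the following sketch (purely illustrative; it assumes the call is made from within the same class and the project allows unsafe code):

byte[] x = new byte[1024];
byte[] y = new byte[1024];
y[1023] = 1;    // difference near the tail

bool forward  = UnsafeEquals(x, y);                              // scans from the front before reaching the mismatch
bool backward = UnsafeEquals(x, y, CompareDirection.Backward);   // reaches the mismatch almost immediately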

Other answers

For comparing short byte arrays, the following is an interesting hack:

if(myByteArray1.Length != myByteArray2.Length) return false;
if(myByteArray1.Length == 8)
   return BitConverter.ToInt64(myByteArray1, 0) == BitConverter.ToInt64(myByteArray2, 0); 
else if(myByteArray1.Length == 4)
   return BitConverter.ToInt32(myByteArray1, 0) == BitConverter.ToInt32(myByteArray2, 0);

Beyond that, I'd probably fall back to the solution listed in the question.

It would be interesting to profile this code.
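
Putting the hack and the fallback together, a hedged sketch might look like this (the method name is mine):

static bool ShortByteArrayEquals(byte[] a, byte[] b)
{
    if (a.Length != b.Length) return false;
    if (a.Length == 8)
        return BitConverter.ToInt64(a, 0) == BitConverter.ToInt64(b, 0);
    if (a.Length == 4)
        return BitConverter.ToInt32(a, 0) == BitConverter.ToInt32(b, 0);

    // fall back to the plain loop from the question for other lengths
    for (int i = 0; i < a.Length; i++)
        if (a[i] != b[i]) return false;
    return true;
}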

.NET 3.5 and newer has a new public type, System.Data.Linq.Binary, that encapsulates byte[]. It implements IEquatable<Binary>, which (in effect) compares two byte arrays. Note that System.Data.Linq.Binary also has an implicit conversion operator from byte[].

MSDN documentation: System.Data.Linq.Binary
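
Thanks to that implicit conversion, using it could look roughly like this sketch (it assumes a reference to the System.Data.Linq assembly):

using System.Data.Linq;

byte[] a1 = { 1, 2, 3 };
byte[] a2 = { 1, 2, 3 };

Binary b1 = a1;                 // implicit conversion from byte[]
Binary b2 = a2;
bool equal = b1.Equals(b2);     // length check, hash check, then byte-by-byte loop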

Reflector decompilation of the Equals method:

private bool EqualsTo(Binary binary)
{
    if (this != binary)
    {
        if (binary == null)
        {
            return false;
        }
        if (this.bytes.Length != binary.bytes.Length)
        {
            return false;
        }
        if (this.hashCode != binary.hashCode)
        {
            return false;
        }
        int index = 0;
        int length = this.bytes.Length;
        while (index < length)
        {
            if (this.bytes[index] != binary.bytes[index])
            {
                return false;
            }
            index++;
        }
    }
    return true;
}

Interestingly, they only proceed to the byte-by-byte comparison loop if the hashes of the two Binary objects are the same. This, however, comes at the cost of computing the hash in the constructor of Binary objects (by traversing the array with a for loop :-)).

The above implementation means that in the worst case you may have to traverse the arrays three times: first to compute the hash of array1, then to compute the hash of array2, and finally (because this is the worst case: lengths and hashes are equal) to compare the bytes in array1 with the bytes in array2.

Overall, even though System.Data.Linq.Binary is built into the BCL, I don't think it's the fastest way to compare two byte arrays :-|.

I thought about the block-transfer acceleration built into many graphics cards. But then you would have to copy over all the data byte-wise, so this doesn't help you much if you don't want to implement your whole logic in unmanaged and hardware-dependent code...

Another way of optimization similar to the approach shown above would be to store as much of your data as possible in a long[] rather than a byte[] right from the start, for example if you are reading it sequentially from a binary file, or if you use a memory mapped file, read in data as long[] or single long values. Then, your comparison loop will only need 1/8th of the number of iterations it would have to do for a byte[] containing the same amount of data. It is a matter of when and how often you need to compare vs. when and how often you need to access the data in a byte-by-byte manner, e.g. to use it in an API call as a parameter in a method that expects a byte[]. In the end, you only can tell if you really know the use case...
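
As a minimal sketch of that idea (the method names are mine, and the data length is assumed to be a multiple of 8):

using System;
using System.IO;

static long[] ReadAsLongs(string path)
{
    byte[] bytes = File.ReadAllBytes(path);             // assumes bytes.Length % 8 == 0
    long[] longs = new long[bytes.Length / 8];
    Buffer.BlockCopy(bytes, 0, longs, 0, bytes.Length); // reinterpret as 64-bit chunks
    return longs;
}

static bool LongArraysEqual(long[] a, long[] b)
{
    if (a.Length != b.Length) return false;
    for (int i = 0; i < a.Length; i++)                  // 1/8th of the iterations of a byte[] loop
        if (a[i] != b[i]) return false;
    return true;
}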

Span<T> offers an extremely competitive alternative without having to add confusing and/or non-portable cruft to your own application's code base:

// byte[] is implicitly convertible to ReadOnlySpan<byte>
static bool ByteArrayCompare(ReadOnlySpan<byte> a1, ReadOnlySpan<byte> a2)
{
    return a1.SequenceEqual(a2);
}
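
Calling it stays as simple as with byte[] arguments, since the conversion is implicit:

byte[] a1 = { 1, 2, 3 };
byte[] a2 = { 1, 2, 3 };
bool equal = ByteArrayCompare(a1, a2);   // byte[] converts implicitly to ReadOnlySpan<byte>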

The .NET 6.0.4 implementation of this can be found here.

I've modified @EliArbel's gist to add this method as SpansEqual, drop most of the less interesting performers from the others' benchmarks, run it with different array sizes, output graphs, and mark SpansEqual as the baseline so that it reports how the different methods compare to SpansEqual.

The numbers below are from the results, lightly edited to remove the "Error" column.

|        Method |  ByteCount |               Mean |          StdDev | Ratio | RatioSD |
|-------------- |----------- |-------------------:|----------------:|------:|--------:|
|    SpansEqual |         15 |           2.074 ns |       0.0233 ns |  1.00 |    0.00 |
|  LongPointers |         15 |           2.854 ns |       0.0632 ns |  1.38 |    0.03 |
|      Unrolled |         15 |          12.449 ns |       0.2487 ns |  6.00 |    0.13 |
| PInvokeMemcmp |         15 |           7.525 ns |       0.1057 ns |  3.63 |    0.06 |
|               |            |                    |                 |       |         |
|    SpansEqual |       1026 |          15.629 ns |       0.1712 ns |  1.00 |    0.00 |
|  LongPointers |       1026 |          46.487 ns |       0.2938 ns |  2.98 |    0.04 |
|      Unrolled |       1026 |          23.786 ns |       0.1044 ns |  1.52 |    0.02 |
| PInvokeMemcmp |       1026 |          28.299 ns |       0.2781 ns |  1.81 |    0.03 |
|               |            |                    |                 |       |         |
|    SpansEqual |    1048585 |      17,920.329 ns |     153.0750 ns |  1.00 |    0.00 |
|  LongPointers |    1048585 |      42,077.448 ns |     309.9067 ns |  2.35 |    0.02 |
|      Unrolled |    1048585 |      29,084.901 ns |     428.8496 ns |  1.62 |    0.03 |
| PInvokeMemcmp |    1048585 |      30,847.572 ns |     213.3162 ns |  1.72 |    0.02 |
|               |            |                    |                 |       |         |
|    SpansEqual | 2147483591 | 124,752,376.667 ns | 552,281.0202 ns |  1.00 |    0.00 |
|  LongPointers | 2147483591 | 139,477,269.231 ns | 331,458.5429 ns |  1.12 |    0.00 |
|      Unrolled | 2147483591 | 137,617,423.077 ns | 238,349.5093 ns |  1.10 |    0.00 |
| PInvokeMemcmp | 2147483591 | 138,373,253.846 ns | 288,447.8278 ns |  1.11 |    0.01 |

I was surprised to see SpansEqual not come out on top at the maximum array size, but the difference was so minor that I didn't think it would matter. After refreshing to run on .NET 6.0.4 with my newer hardware, SpansEqual now comfortably outshines everything else at all array sizes.

My system info:

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.22000
AMD Ryzen 9 5900X, 1 CPU, 24 logical and 12 physical cores
.NET SDK=6.0.202
  [Host]     : .NET 6.0.4 (6.0.422.16404), X64 RyuJIT
  DefaultJob : .NET 6.0.4 (6.0.422.16404), X64 RyuJIT

You can use the Enumerable.SequenceEqual method.

using System;
using System.Linq;
...
var a1 = new int[] { 1, 2, 3};
var a2 = new int[] { 1, 2, 3};
var a3 = new int[] { 1, 2, 4};
var x = a1.SequenceEqual(a2); // true
var y = a1.SequenceEqual(a3); // false
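
The same call works directly on byte arrays, which is what the question asks about:

var b1 = new byte[] { 1, 2, 3 };
var b2 = new byte[] { 1, 2, 3 };
var equal = b1.SequenceEqual(b2); // true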

If you can't use .NET 3.5 for some reason, your method is fine. The compiler/runtime environment will optimize your loop, so you don't need to worry about performance.