I have a web server that reads large binary files (several megabytes) into byte arrays. The server could be reading several files at the same time (for different page requests), so I am looking for the most optimized way of doing this without taxing the CPU too much. Is the code below good enough?

public byte[] FileToByteArray(string fileName)
{
    byte[] buff = null;
    FileStream fs = new FileStream(fileName, 
                                   FileMode.Open, 
                                   FileAccess.Read);
    BinaryReader br = new BinaryReader(fs);
    long numBytes = new FileInfo(fileName).Length;
    buff = br.ReadBytes((int) numBytes);
    return buff;
}

Current answer

Overview: if your image was added as a resource with Build Action = Embedded Resource, use GetExecutingAssembly to retrieve the jpg resource into a stream, then read the binary data from the stream into a byte array.

public byte[] GetAImage()
{
    // Requires using System.IO; and using System.Reflection;
    var assembly = Assembly.GetExecutingAssembly();
    var resourceName = "MYWebApi.Images.X_my_image.jpg";

    // GetManifestResourceStream returns null if the resource name is wrong.
    using (Stream stream = assembly.GetManifestResourceStream(resourceName))
    using (var ms = new MemoryStream())
    {
        // A single Stream.Read call may return fewer bytes than requested,
        // so copy the whole stream instead of issuing one Read.
        stream.CopyTo(ms);
        return ms.ToArray();
    }
}

Other answers

Depending on the frequency of operations, the size of the files, and the number of files you're looking at, there are other performance issues to take into consideration. One thing to remember is that each of your byte arrays will be released at the mercy of the garbage collector. If you're not caching any of that data, you could end up creating a lot of garbage and losing most of your performance to % Time in GC. If the chunks are larger than 85 KB, you'll be allocating to the Large Object Heap (LOH), which requires a collection of all generations to free up (this is very expensive, and on a server will stop all execution while it's going on). Additionally, if you have a ton of objects on the LOH, you can end up with LOH fragmentation (the LOH is never compacted), which leads to poor performance and out-of-memory exceptions. You can recycle the process once you hit a certain point, but I don't know if that's a best practice.

The point is that you should consider the full life cycle of your application before simply reading all the bytes into memory in the fastest way possible, or you may be trading short-term performance for overall performance.
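As one illustration of how to relieve that GC pressure, buffers can be rented from a shared pool instead of allocating a fresh array per request. The sketch below assumes ArrayPool&lt;byte&gt; is available (built into .NET Core, or via the System.Buffers NuGet package); the ProcessFile method and its callback shape are hypothetical names, not from the answer above:

public static void ProcessFile(string fileName, Action<byte[], int> process)
{
    // Requires using System.Buffers; and using System.IO;
    using (var fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        int length = (int)fs.Length;
        // Rent a reusable buffer rather than allocating new (possibly LOH) garbage per request.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(length);
        try
        {
            int offset = 0;
            while (offset < length)
            {
                int n = fs.Read(buffer, offset, length - offset);
                if (n == 0) break; // end of stream reached early
                offset += n;
            }
            // A rented buffer may be longer than requested, so pass the actual byte count.
            process(buffer, offset);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}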

Use the BufferedStream class in C# to improve performance. A buffer is a block of bytes in memory used to cache data, thereby reducing the number of calls to the operating system. Buffers can improve read and write performance.

See the following code sample and further explanation: http://msdn.microsoft.com/en-us/library/system.io.bufferedstream.aspx
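A minimal sketch of the idea (the method name, 64 KB buffer size, and 4 KB chunk size are illustrative, not taken from the linked page); note that buffering pays off mainly when you issue many small reads, not one large one:

public static byte[] ReadWithBufferedStream(string fileName)
{
    using (var fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    using (var bs = new BufferedStream(fs, 64 * 1024)) // 64 KB in-memory cache
    using (var ms = new MemoryStream())
    {
        var chunk = new byte[4096];
        int n;
        // Most of these small reads are served from BufferedStream's cache
        // rather than by separate calls to the operating system.
        while ((n = bs.Read(chunk, 0, chunk.Length)) > 0)
            ms.Write(chunk, 0, n);
        return ms.ToArray();
    }
}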

Your code can be factored to this (in lieu of File.ReadAllBytes):

public byte[] ReadAllBytes(string fileName)
{
    byte[] buffer;
    using (FileStream fs = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        buffer = new byte[fs.Length];
        int offset = 0;
        // A single Read call may return fewer bytes than requested,
        // so loop until the buffer is full.
        while (offset < buffer.Length)
        {
            int read = fs.Read(buffer, offset, buffer.Length - offset);
            if (read == 0)
                break; // end of stream reached early
            offset += read;
        }
    }
    return buffer;
}

Note the Int32.MaxValue limit that the Read method places on file size; in other words, you can only read a 2 GB chunk at once.

Also note that the FileStream constructor has an overload whose last argument is a buffer size.
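For reference, a sketch of that overload (the 1 MB value is illustrative):

using (var fs = new FileStream(fileName, FileMode.Open, FileAccess.Read,
                               FileShare.Read, bufferSize: 1024 * 1024))
{
    // Reads now go through a 1 MB internal buffer.
}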

I would also recommend reading up on FileStream and BufferedStream.

As always, a simple sample program to profile which is fastest will be most beneficial.
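A minimal profiling sketch along those lines (my own illustration: the file path and iteration count are placeholders, and results will vary with the OS file cache):

using System;
using System.Diagnostics;
using System.IO;

class ReadBenchmark
{
    static void Main()
    {
        string path = "test.bin"; // placeholder: point this at a real multi-megabyte file

        Time("File.ReadAllBytes", () => File.ReadAllBytes(path));
        Time("FileStream read loop", () =>
        {
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                var buffer = new byte[fs.Length];
                int offset = 0;
                while (offset < buffer.Length)
                {
                    int n = fs.Read(buffer, offset, buffer.Length - offset);
                    if (n == 0) break;
                    offset += n;
                }
                return buffer;
            }
        });
    }

    static void Time(string label, Func<byte[]> read)
    {
        read(); // warm-up pass: JIT compilation and OS file cache
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100; i++) read();
        sw.Stop();
        Console.WriteLine("{0}: {1:F2} ms per read", label, sw.ElapsedMilliseconds / 100.0);
    }
}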

Also, your underlying hardware has a large effect on performance. Are you using server-grade hard drives with large caches and a RAID card with an onboard memory cache? Or are you using a standard drive connected to an IDE port?

I would recommend trying the Response.TransmitFile() method, followed by Response.Flush() and Response.End(), for serving your large files.
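A minimal sketch of that approach in a classic ASP.NET (System.Web) handler; the handler name, content type, and file path are illustrative:

using System.Web;

public class LargeFileHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        HttpResponse response = context.Response;
        response.ContentType = "application/octet-stream";
        // TransmitFile streams the file to the client without
        // buffering the whole thing in server memory.
        response.TransmitFile(context.Server.MapPath("~/files/big.bin")); // path is illustrative
        response.Flush();
        response.End();
    }
}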

If you're dealing with files above 2 GB, you'll find that the above methods fail.

It's much easier to just hand the stream to MD5 and let it chunk your file for you:

private byte[] computeFileHash(string filename)
{
    // MD5 and FileStream are both IDisposable, so wrap them in using blocks.
    using (MD5 md5 = MD5.Create())
    using (FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
    {
        // ComputeHash reads the stream in chunks internally,
        // so the whole file is never held in memory at once.
        return md5.ComputeHash(fs);
    }
}
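A usage sketch (the file path is a placeholder), formatting the 16-byte hash as a hex string:

byte[] hash = computeFileHash(@"C:\data\big.bin"); // path is illustrative
string hex = BitConverter.ToString(hash).Replace("-", "");
Console.WriteLine(hex);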