I have a web server that reads large binary files (several megabytes) into byte arrays. The server could be reading several files at the same time (different page requests), so I'm looking for the most optimized way of doing this without taxing the CPU too much. Is the code below good enough?
public byte[] FileToByteArray(string fileName)
{
    byte[] buff = null;
    FileStream fs = new FileStream(fileName,
                                   FileMode.Open,
                                   FileAccess.Read);
    BinaryReader br = new BinaryReader(fs);
    long numBytes = new FileInfo(fileName).Length;
    buff = br.ReadBytes((int)numBytes);
    return buff;
}
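As an aside, nothing in that snippet disposes the FileStream or the BinaryReader. If you really do need the whole file in a single array, File.ReadAllBytes performs the same read in one call and handles the cleanup internally:

byte[] buff = File.ReadAllBytes(fileName);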
I would say the answer here is generally "no". Unless you absolutely need all of the data at once, consider using a Stream-based API (or some variant of reader/iterator). That is especially important when you have multiple parallel operations (as the question suggests), to minimize the system load and maximize throughput.
For example, if you are streaming data to the caller:
Stream dest = ...
using (Stream source = File.OpenRead(path)) {
    byte[] buffer = new byte[2048];
    int bytesRead;
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0) {
        dest.Write(buffer, 0, bytesRead);
    }
}
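Incidentally, since .NET 4 that manual loop is exactly what Stream.CopyTo does for you. And for a web server handling many requests in parallel, an asynchronous copy keeps request threads free while waiting on the disk. A minimal sketch, assuming .NET 4.5 or later; FileToStreamAsync is just an illustrative name:

using System.IO;
using System.Threading.Tasks;

public static class StreamingExample
{
    // Streams the file to 'dest' without ever holding the whole file in memory.
    // useAsync: true asks the OS for overlapped (truly asynchronous) file I/O.
    public static async Task FileToStreamAsync(string path, Stream dest)
    {
        using (var source = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, bufferSize: 4096,
                                           useAsync: true))
        {
            // Same read/write loop as above, but awaited instead of blocking.
            await source.CopyToAsync(dest);
        }
    }
}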
Depending on the frequency of operations, the size of the files, and the number of files you're looking at, there are other performance issues to take into consideration. One thing to remember is that each of your byte arrays will be released at the mercy of the garbage collector. If you're not caching any of that data, you could end up creating a lot of garbage and losing most of your performance to % Time in GC. If the chunks are larger than 85K, you'll be allocating on the Large Object Heap (LOH), which requires a collection of all generations to free up (this is very expensive, and on a server will stop all execution while it's going on). Additionally, if you have a ton of objects on the LOH, you can end up with LOH fragmentation (the LOH is never compacted automatically), which leads to poor performance and out-of-memory exceptions. You can recycle the process once you hit a certain point, but I don't know if that's a best practice.
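One way to keep those buffers away from the garbage collector altogether is to reuse them instead of allocating a fresh array per request. A minimal sketch, assuming ArrayPool&lt;byte&gt; from System.Buffers is available (built into .NET Core, a NuGet package on .NET Framework); PooledCopy is just an illustrative name:

using System.Buffers;
using System.IO;

public static class PooledCopy
{
    // Copies 'source' to 'dest' using a rented buffer, so repeated calls
    // produce essentially no per-call garbage.
    public static void Copy(Stream source, Stream dest)
    {
        // 64K stays comfortably below the ~85,000-byte LOH threshold, and the
        // pooled array is reused across calls rather than re-allocated.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(64 * 1024);
        try
        {
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                dest.Write(buffer, 0, bytesRead);
            }
        }
        finally
        {
            // Always return the buffer to the pool, even if the copy throws.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}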
The point is: before you reach for the fastest way to read all of the bytes into memory, consider the entire lifecycle of the application, or you may end up trading overall performance for short-term performance.