mmap is far faster. You could write a simple benchmark to prove it to yourself:
#include <fstream>

char data[0x1000];
std::ifstream in("file.bin", std::ios::binary);
while (in.read(data, sizeof(data)) || in.gcount() > 0)
{
    // do something with the in.gcount() bytes just read
}
Versus:
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

const size_t page_size = 0x1000;
int fd = open("filename.bin", O_RDONLY);
struct stat st;
fstat(fd, &st);                       // one way to get the file size
const off_t file_size = st.st_size;
off_t off = 0;
void *data;
while (off < file_size)
{
    // MAP_PRIVATE (or MAP_SHARED) is required in the flags argument
    data = mmap(NULL, page_size, PROT_READ, MAP_PRIVATE, fd, off);
    // do stuff with data
    munmap(data, page_size);
    off += page_size;
}
Obviously I've left out some details (like how to determine when you reach the end of the file in the event that your file isn't a multiple of page_size, for instance), but it really shouldn't be much more complicated than this.
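That particular detail only takes a couple of lines; a minimal sketch, assuming the file_size, off, and data variables from the loop above:

// on the final iteration the file may end partway through the mapping,
// so only process the bytes that actually belong to the file
size_t valid = (off + (off_t)page_size <= file_size)
             ? page_size
             : (size_t)(file_size - off);
// do stuff with the first `valid` bytes of data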
If you can, you might also try to break your data up into multiple files that can be mmap()-ed in whole instead of in part (much simpler).
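To show why the whole-file case is so much simpler, here is a minimal sketch, assuming POSIX; chunk0.bin is a hypothetical file name, and error handling is omitted:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int fd = open("chunk0.bin", O_RDONLY);   // one of the smaller per-chunk files
struct stat st;
fstat(fd, &st);
void *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
// all st.st_size bytes are addressable here
munmap(data, st.st_size);
close(fd);

One mmap/munmap pair per file, and no window or offset bookkeeping at all.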
A couple of months ago I had a half-baked implementation of a sliding-window mmap()-ed stream class for boost_iostreams, but nobody cared and I got busy with other stuff. Most unfortunately, I deleted an archive of old unfinished projects a few weeks ago, and that was one of the victims :-(
Update: I should also add the caveat that this benchmark would look quite different in Windows, because Microsoft implemented a nifty file cache that does most of what you would do with mmap in the first place. I.e., for frequently-accessed files, you could just do std::ifstream.read() and it would be as fast as mmap, because the file cache would have already done a memory mapping for you, and it's transparent.
Final Update: Look, people: across a lot of different platform combinations of OS and standard libraries and disks and memory hierarchies, I can't say for certain that the system call mmap, viewed as a black box, will always always always be substantially faster than read. That wasn't exactly my intent, even if my words could be construed that way. Ultimately, my point was that memory-mapped i/o is generally faster than byte-based i/o; this is still true. If you find experimentally that there's no difference between the two, then the only explanation that seems reasonable to me is that your platform implements memory-mapping under the covers in a way that is advantageous to the performance of calls to read. The only way to be absolutely certain that you're using memory-mapped i/o in a portable way is to use mmap. If you don't care about portability and you can rely on the particular characteristics of your target platforms, then using read may be suitable without sacrificing measurably any performance.
Edit to clean up the answer list:
@jbl:
the sliding window mmap sounds interesting. Can you say a little more about it?
Sure - I was writing a C++ library for Git (a libgit++, if you will), and I ran into a similar problem: I needed to be able to open large (very large) files and not have performance be a total dog (as it would be with std::fstream).
Boost::Iostreams already has a mapped_file Source, but the problem was that it was mmapping whole files, which limits you to 2^(wordsize) bytes of address space. On 32-bit machines, 4GB isn't big enough. It's not unreasonable to expect to have .pack files in Git that become much larger than that, so I needed to read the file in chunks without resorting to regular file i/o.

Under the covers of Boost::Iostreams, I implemented a Source, which is more or less another view of the interaction between std::streambuf and std::istream. You could also try a similar approach by just inheriting std::filebuf into a mapped_filebuf and similarly, inheriting std::fstream into a mapped_fstream. It's the interaction between the two that's difficult to get right. Boost::Iostreams has some of the work done for you, and it also provides hooks for filters and chains, so I thought it would be more useful to implement it that way.
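The original code is gone, but here is a minimal sketch of the shape such a Source can take, assuming POSIX and Boost.Iostreams. The name mmap_source, the 1 MiB default window, and the omitted error handling are all mine, not the lost implementation:

#include <boost/iostreams/concepts.hpp>
#include <boost/iostreams/stream.hpp>
#include <algorithm>
#include <cstring>
#include <memory>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

class mmap_source : public boost::iostreams::source {
public:
    // window must be a multiple of the page size so offsets stay page-aligned
    explicit mmap_source(const char* path, off_t window = 1 << 20)
        : pimpl_(new impl(path, window)) {}

    std::streamsize read(char* s, std::streamsize n) { return pimpl_->read(s, n); }

private:
    struct impl {
        impl(const char* path, off_t window)
            : win(window), pos(0), map_off(0), map_len(0), map(MAP_FAILED)
        {
            fd = open(path, O_RDONLY);
            struct stat st;
            fstat(fd, &st);
            size = st.st_size;
        }
        ~impl()
        {
            if (map != MAP_FAILED) munmap(map, map_len);
            close(fd);
        }
        std::streamsize read(char* s, std::streamsize n)
        {
            if (pos >= size) return -1;  // EOF, per the Source concept
            // slide the window when the read position leaves the current mapping
            if (map == MAP_FAILED || pos < map_off || pos >= map_off + (off_t)map_len)
            {
                if (map != MAP_FAILED) munmap(map, map_len);
                map_off = pos - pos % win;                      // page-aligned
                map_len = (size_t)std::min(win, size - map_off);
                map = mmap(NULL, map_len, PROT_READ, MAP_PRIVATE, fd, map_off);
            }
            std::streamsize avail =
                std::min<std::streamsize>(n, map_off + (off_t)map_len - pos);
            std::memcpy(s, (const char*)map + (pos - map_off), avail);
            pos += avail;
            return avail;
        }
        off_t win, pos, size, map_off;
        size_t map_len;
        int fd;
        void* map;
    };
    // Boost.Iostreams copies devices by value, so the resources are shared
    std::shared_ptr<impl> pimpl_;
};

It can then be wrapped so it reads like any other stream:

boost::iostreams::stream<mmap_source> in(mmap_source("huge.pack"));  // hypothetical file

The state lives behind a shared_ptr because Boost.Iostreams copies devices by value; shared ownership keeps the fd and the mapping valid across those copies, while the window-sized mapping slides along the file so the address-space footprint stays bounded regardless of file size.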