A POSIX environment provides at least two ways of accessing files. There are the standard system calls open(), read(), write(), and friends, but there is also the option of using mmap() to map the file into virtual memory.
When is it preferable to use one over the other? What are their individual advantages?
Current answer
Memory mapping has a huge speed advantage over traditional I/O. It lets the operating system read the data from the source file as the pages in the memory-mapped region are touched. This works by creating faulting pages, which the OS detects; the OS then loads the corresponding data from the file automatically.
This works the same way as the paging mechanism, and the I/O is usually fast because the data is read on system page boundaries and in page-sized chunks (usually 4K), a size to which most file-system caches are also optimized.
Other answers
One advantage not listed here is the ability of mmap() to keep a read-only mapping as clean pages. If you allocate a buffer in the process's address space and then use read() to fill the buffer from a file, the memory pages corresponding to that buffer are now dirty, since they have been written to.
Dirty pages cannot be dropped from RAM by the kernel. If there is swap space, they can be paged out to swap, but this is costly, and on some systems, such as small embedded devices with only flash memory, there is no swap at all. In that case, the buffer will be stuck in RAM until the process exits, or perhaps gives it back with madvise().
Pages of an mmap() mapping that have not been written to are clean. If the kernel needs RAM, it can simply drop them and use the RAM the pages were in. If the process that owns the mapping accesses it again, it causes a page fault and the kernel reloads the pages from the file they originally came from, the same way they were populated in the first place.
This does not require more than one process to be using the mapped file.
In addition to the other nice answers, here is a quote from Linux System Programming, written by Robert Love, an expert at Google:
Advantages of mmap()

Manipulating files via mmap() has a handful of advantages over the standard read() and write() system calls. Among them are:

- Reading from and writing to a memory-mapped file avoids the extraneous copy that occurs when using the read() or write() system calls, where the data must be copied to and from a user-space buffer.
- Aside from any potential page faults, reading from and writing to a memory-mapped file does not incur any system call or context switch overhead. It is as simple as accessing memory.
- When multiple processes map the same object into memory, the data is shared among all the processes. Read-only and shared writable mappings are shared in their entirety; private writable mappings have their not-yet-COW (copy-on-write) pages shared.
- Seeking around the mapping involves trivial pointer manipulations. There is no need for the lseek() system call.

For these reasons, mmap() is a smart choice for many applications.

Disadvantages of mmap()

There are a few points to keep in mind when using mmap():

- Memory mappings are always an integer number of pages in size. Thus, the difference between the size of the backing file and an integer number of pages is "wasted" as slack space. For small files, a significant percentage of the mapping may be wasted. For example, with 4 KB pages, a 7 byte mapping wastes 4,089 bytes.
- The memory mappings must fit into the process' address space. With a 32-bit address space, a very large number of various-sized mappings can result in fragmentation of the address space, making it hard to find large free contiguous regions. This problem, of course, is much less apparent with a 64-bit address space.
- There is overhead in creating and maintaining the memory mappings and associated data structures inside the kernel. This overhead is generally obviated by the elimination of the double copy mentioned in the previous section, particularly for larger and frequently accessed files.
For these reasons, the benefits of mmap( ) are most greatly realized when the mapped file is large (and thus any wasted space is a small percentage of the total mapping), or when the total size of the mapped file is evenly divisible by the page size (and thus there is no wasted space).
mmap has the advantage when you have random access on big files. Another advantage is that you access it with memory operations (memcpy, pointer arithmetic), without bothering with buffering. Normal I/O can be quite difficult when you have structures bigger than your buffer; the code to handle that is often hard to get right, while mmap is generally easier. That said, there are certain traps when working with mmap. As people have already mentioned, mmap is quite costly to set up, so it is generally worth using only above a certain size (varying from machine to machine).
For purely sequential access to a file, it is also not always the better solution, though an appropriate call to madvise() can mitigate this.
You have to be careful with the alignment restrictions of some architectures (SPARC, Itanium); with read/write I/O the buffers are usually properly aligned and do not trap when dereferencing a casted pointer.
You also have to be careful not to access outside the mapping. This can easily happen if you use string functions on the mapping and your file does not contain a \0 at the end. It will work most of the time when the file size is not a multiple of the page size, since the last page is zero-filled (mapped areas are always a multiple of the page size).
I found mmap() had no advantage when reading small files (under 16K). The overhead of page faulting to read the whole file was very high compared with just doing a single read() system call. This is because the kernel can sometimes satisfy a read entirely within your time slice, meaning your code does not switch away. With a page fault, it seemed more likely that another program would be scheduled, making the file operation have a higher latency.