In the context of C++ dynamic memory allocation, I've heard the term "memory fragmentation" a few times. I've found some questions about how to deal with memory fragmentation, but I can't find one that addresses it directly. So:

What is memory fragmentation? How can I tell whether memory fragmentation is a problem for my application? What kinds of programs are most likely to suffer from it? What are common ways of dealing with memory fragmentation?

Also:

I've heard that frequent dynamic allocation increases memory fragmentation. Is that true? In the context of C++, I understand that all the standard containers (std::string, std::vector, etc.) use dynamic memory allocation. If these are used throughout a program (especially std::string), is memory fragmentation more likely to be a problem? How is memory fragmentation dealt with in an STL-heavy application?


Current answer

Update: Google TCMalloc (Thread-Caching Malloc) — I have found it to be quite good at handling fragmentation in long-running processes.


I have been developing a server application that had problems with memory fragmentation on HP-UX 11.23/11.31 ia64.

It looked like this. There was a process that made memory allocations and deallocations and ran for days. Even though there were no memory leaks, the memory consumption of the process kept increasing.

About my experience. On HP-UX it is very easy to find memory fragmentation using HP-UX gdb. You set a break-point, and when you hit it you run the command info heap, which shows all the memory allocations for the process and the total size of the heap. Then you continue your program, and some time later you hit the break-point again and run info heap once more. If the total size of the heap is bigger but the number and size of the separate allocations are the same, it is likely that you have memory fragmentation problems. If necessary, repeat this check a few more times.

My way of improving the situation was this. After some analysis with HP-UX gdb I saw that the memory problems were caused by the fact that I used std::vector for storing certain types of information from a database. std::vector requires that its data be kept in one contiguous block. I had a few containers based on std::vector, and these containers were regularly recreated. There were often situations when new records were added to the database and the containers were recreated afterwards. Since the recreated containers were bigger, they did not fit into the available blocks of free memory, and the runtime asked the OS for a new, bigger block. As a result, even though there were no memory leaks, the memory consumption of the process grew. I improved the situation by changing the containers: instead of std::vector I started using std::deque, which allocates memory for its data in a different way.
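The container switch described above can be sketched roughly as follows. The Record type here is a hypothetical stand-in for the database rows the answer mentions; the point is only the choice of container:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical record type standing in for the database rows mentioned above.
struct Record {
    int id;
    std::string payload;
};

// std::vector keeps all elements in one contiguous block, so regrowing a
// large container forces the runtime to find (or request from the OS) an
// even larger block. std::deque allocates in fixed-size chunks instead, so
// growth never needs one huge contiguous region and can reuse small free
// blocks left behind by earlier deallocations.
using RecordStore = std::deque<Record>;

inline void append_record(RecordStore& store, int id, std::string payload) {
    store.push_back(Record{id, std::move(payload)});  // adds at most one chunk
}
```

Whether this helps depends on the allocator and the access pattern; std::deque trades contiguity (and some lookup speed) for chunked growth.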

I know that one of the ways to avoid memory fragmentation on HP-UX is to use the Small Block Allocator or to use MallocNextGen. On RedHat Linux, the default allocator seems to handle allocation of lots of small blocks quite well. On Windows there is the Low-fragmentation Heap, which addresses the problem of large numbers of small allocations.

My understanding is that in an STL-heavy application you first have to identify the problems. Memory allocators (like the one in libc) actually handle the problem of lots of small allocations, which is typical for std::string (for instance, in my server application there are lots of STL strings, but as I can see from running info heap they are not causing any problems). My impression is that you need to avoid frequent large allocations. Unfortunately, there are situations when you can't avoid them and have to change your code. As I said, in my case I improved the situation by switching to std::deque. If you identify your memory fragmentation, it might be possible to talk about it more precisely.

Other answers

What is memory fragmentation?

Memory fragmentation is when most of your memory is allocated in a large number of non-contiguous blocks, or chunks, leaving a good percentage of your total memory unallocated but unusable in most typical scenarios. This results in out-of-memory exceptions or allocation errors (i.e. malloc returns null).

The easiest way to think about this is to imagine you have a big empty wall that you need to put pictures of varying sizes on. Each picture takes up a certain size and you obviously can't split it into smaller pieces to make it fit. You need an empty spot on the wall, the size of the picture, or else you can't put it up. Now, if you start hanging pictures on the wall and you're not careful about how you arrange them, you will soon end up with a wall that's partially covered with pictures, and even though you may have empty spots, most new pictures won't fit because they're larger than the available spots. You can still hang really small pictures, but most of the larger ones won't fit. So you'll have to re-arrange (compact) the ones already on the wall to make room for more.

Now, imagine the wall is your (heap) memory and the pictures are objects. That's memory fragmentation.

How can I tell whether memory fragmentation is a problem for my application? What kinds of programs are most likely to suffer from it?

An obvious sign that you may be dealing with memory fragmentation is if you get many allocation errors, especially when the percentage of used memory is high but you haven't yet used up all of the memory — so technically you should have plenty of room for the objects you are trying to allocate.

When memory is heavily fragmented, memory allocations may also take longer, because the allocator has to do more work to find a suitable space for the new object. If, in turn, you have many allocations (and you probably do, since you ended up with memory fragmentation), the allocation time may even cause noticeable delays.

What are common ways of dealing with memory fragmentation?

Use good algorithms for allocating memory. Instead of allocating memory for lots of small objects, pre-allocate memory for a contiguous array of those smaller objects. Sometimes being a little wasteful when allocating memory can improve performance, and it may save you the trouble of having to deal with memory fragmentation.
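As a minimal sketch of the pre-allocation advice above (the Particle type and the expected count are illustrative assumptions, not part of any particular program):

```cpp
#include <cstddef>
#include <vector>

struct Particle { float x, y, z; };

// Instead of many individual news (one small heap block per Particle),
// reserve one contiguous block up front. If the guess is high this wastes
// a little memory, but it keeps tiny allocations from being scattered
// across the heap.
std::vector<Particle> make_particles(std::size_t expected_count) {
    std::vector<Particle> particles;
    particles.reserve(expected_count);  // one allocation for all elements
    for (std::size_t i = 0; i < expected_count; ++i)
        particles.push_back(Particle{float(i), 0.0f, 0.0f});
    return particles;
}
```

Because capacity is reserved once, none of the push_back calls trigger a reallocation, so the heap sees a single request instead of a growth sequence.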

Here is a super-simplified version, for dummies.

When objects are created in memory, they are added at the end of the used portion of memory.

If an object that is not at the end of the used portion of memory is deleted — meaning the object was located between two other objects — it creates a "hole".

This is what's called fragmentation.

Memory fragmentation occurs because memory blocks of different sizes are requested. Consider a buffer of 100 bytes. You request two chars, then an integer. Now you free the two chars and request a new integer — but the integer cannot fit in the space left by the two chars. That memory cannot be reused, because it is not part of a contiguous block large enough to satisfy the new allocation. On top of that, you've invoked a lot of allocator overhead for the chars.
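The scenario above can be simulated with a toy first-fit allocator over a 100-byte arena. This is purely illustrative — real malloc implementations add headers, alignment, and coalescing — but it shows how the freed 2-byte hole stays unusable for the 4-byte integer:

```cpp
#include <cstddef>
#include <vector>

// Toy first-fit allocator over a 100-byte arena (illustrative only).
struct Block { std::size_t off, size; bool used; };

struct Arena {
    std::vector<Block> blocks{{0, 100, false}};  // one big free block

    // Returns the offset of the allocation, or -1 on failure.
    long alloc(std::size_t n) {
        for (std::size_t i = 0; i < blocks.size(); ++i) {
            if (!blocks[i].used && blocks[i].size >= n) {
                if (blocks[i].size > n)  // split off remainder as free block
                    blocks.insert(blocks.begin() + i + 1,
                                  Block{blocks[i].off + n,
                                        blocks[i].size - n, false});
                blocks[i].size = n;
                blocks[i].used = true;
                return long(blocks[i].off);
            }
        }
        return -1;  // no free block is big enough
    }

    // Mark a used block free again; deliberately no coalescing, like the
    // worst case described in the text.
    void release(std::size_t off) {
        for (Block& b : blocks)
            if (b.used && b.off == off) { b.used = false; return; }
    }
};
```

Allocating 2 bytes (the chars), then 4 (the int), then freeing the chars and allocating another int leaves the 2-byte hole at offset 0 permanently skipped: the new int lands after the first one instead.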

Essentially, on most systems memory comes in blocks of a certain size. Once you split those blocks up, they cannot be rejoined until the whole block is freed. This can lead to whole blocks being in use when, in fact, only a small part of each block is actually in use.

The primary way to reduce heap fragmentation is to make larger, less frequent allocations. In the extreme, you can use a managed heap that is capable of moving objects, at least, within your own code. This completely eliminates the problem - from a memory perspective, anyway. Obviously moving objects and such has a cost. In reality, you only really have a problem if you are allocating very small amounts off the heap often. Using contiguous containers (vector, string, etc) and allocating on the stack as much as humanly possible (always a good idea for performance) is the best way to reduce it. This also increases cache coherence, which makes your application run faster.
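The stack-allocation advice above can be sketched as follows; the scratch-buffer size N and the summing task are illustrative assumptions:

```cpp
#include <cstddef>
#include <array>
#include <numeric>

// Small, fixed-size working sets can live on the stack: no heap traffic,
// no fragmentation, and the contiguous layout helps cache locality.
template <std::size_t N>
int sum_scratch() {
    std::array<int, N> scratch{};                     // stack-allocated
    std::iota(scratch.begin(), scratch.end(), 1);     // fill 1..N
    return std::accumulate(scratch.begin(), scratch.end(), 0);
}
```

The trade-off is that the size must be known at compile time and must stay small enough for the stack; for larger or dynamic sizes, a reserved std::vector is the contiguous-container alternative.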

Something you should keep in mind is that on a 32-bit x86 desktop system, you have a full 2GB of memory, which is split into 4KB "pages" (fairly sure the page size is the same on all x86 systems). You would have to invoke some omgwtfbbq fragmentation to have a problem. Fragmentation really is an issue of the past, since modern heaps are excessively large for the vast majority of applications, and there's a prevalence of systems that are capable of withstanding it, such as managed heaps.

Memory fragmentation is most likely to occur when you allocate and deallocate many objects of varying sizes. Suppose you have the following layout in memory:

obj1 (10kb) | obj2 (20kb) | obj3 (5kb) | unused space (100kb)

Now, when obj2 is freed, you have 120kb of unused memory, but you cannot allocate a full 120kb block, because the memory is fragmented.

Common techniques to avoid this effect include ring buffers and object pools. In the context of the STL, methods like std::vector::reserve() can help.
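A minimal object-pool sketch, to make the idea concrete (this is a simplified illustration — all slots are default-constructed up front, acquire/release just hand out pointers, and there is no per-object construction or thread safety):

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity object pool: all storage is one up-front allocation, and
// released objects go back on a free list for reuse, so the general-purpose
// heap never sees a churn of small same-sized blocks.
template <typename T>
class Pool {
    std::vector<T>  storage_;  // one contiguous allocation of all slots
    std::vector<T*> free_;     // slots currently available
public:
    explicit Pool(std::size_t capacity) : storage_(capacity) {
        free_.reserve(capacity);
        for (T& slot : storage_) free_.push_back(&slot);
    }
    T* acquire() {
        if (free_.empty()) return nullptr;  // pool exhausted
        T* p = free_.back();
        free_.pop_back();
        return p;
    }
    void release(T* p) { free_.push_back(p); }
};
```

Because every object comes from the same pre-sized block, freeing and reallocating objects of this type can never fragment the surrounding heap.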
