I've heard the term "memory fragmentation" used a few times in the context of C++ dynamic memory allocation. I've found some questions about how to deal with memory fragmentation, but I can't find one that deals with it directly. So:
What is memory fragmentation?
How can I tell if memory fragmentation is a problem for my application? What kind of program is most likely to be affected?
What are common ways to deal with memory fragmentation?
Also:
I've heard that using dynamic allocation a lot can increase memory fragmentation. Is this true? In the context of C++, I understand that all the standard containers (std::string, std::vector, etc.) use dynamic memory allocation. If these are used throughout a program (especially std::string), is memory fragmentation more likely to be a problem?
How can memory fragmentation be dealt with in an STL-heavy application?
Update:
Google TCMalloc: Thread-Caching Malloc
It has been found to be quite good at handling fragmentation in long-running processes.
I have been developing a server application that had problems with memory fragmentation on HP-UX 11.23/11.31 ia64.
It looked like this: there was a process that made memory allocations and deallocations and ran for days. And even though there were no memory leaks, the memory consumption of the process kept increasing.
About my experience: on HP-UX it is quite easy to find memory fragmentation using HP-UX gdb. You set a break-point and, when you hit it, you run the command info heap to see all memory allocations for the process and the total size of the heap. Then you continue your program, and some time later you hit the break-point again and run info heap once more. If the total size of the heap is bigger but the number and size of the separate allocations are the same, then it is likely that you have memory fragmentation problems. If necessary, repeat this check a few more times.
My way of improving the situation was this. After doing some analysis with HP-UX gdb, I saw that the memory problems were caused by the fact that I used std::vector for storing some types of information from a database. std::vector requires that its data be kept in one contiguous block. I had a few containers based on std::vector, and these containers were regularly recreated. There were often situations where new records were added to the database and the containers were recreated afterwards. Since the recreated containers were bigger, they did not fit into the available blocks of free memory, and the runtime asked the OS for a new, bigger block. As a result, even though there were no memory leaks, the memory consumption of the process grew. I improved the situation by changing the containers: instead of std::vector I started using std::deque, which allocates memory for its data in a different way.
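As a purely illustrative sketch (not the original server code; the Record type is a made-up stand-in for the database rows mentioned above), the difference in growth behaviour looks roughly like this: every time a std::vector outgrows its capacity it has to find one new, larger contiguous block and abandon the old one, while a std::deque grows by adding fixed-size chunks and never needs a single huge free block.

```cpp
#include <deque>
#include <iostream>
#include <vector>

// Hypothetical record type standing in for the database rows; the real
// type from the server application is not shown in the answer.
struct Record {
    char payload[64];
};

int main() {
    std::vector<Record> v;
    std::deque<Record> d;

    for (int i = 0; i < 100000; ++i) {
        // Whenever v.size() reaches v.capacity(), the vector allocates a
        // new contiguous block big enough for all elements, moves them
        // over and frees the old block. The old block becomes a free
        // "hole" that may be too small for the next, even bigger request.
        v.push_back(Record{});

        // The deque just allocates another fixed-size chunk and links it
        // in; it never needs one huge contiguous free block, so small
        // free blocks left behind elsewhere can be reused.
        d.push_back(Record{});
    }

    std::cout << "vector capacity: " << v.capacity() << " elements\n";
    std::cout << "deque size: " << d.size() << " elements\n";
}
```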
One of the ways to avoid memory fragmentation on HP-UX, as far as I know, is to use the small block allocator or to use MallocNextGen. On RedHat Linux the default allocator seems to handle the allocation of lots of small blocks quite well. On Windows there is the Low-fragmentation Heap, which addresses the problem of a large number of small allocations.
My understanding is that in an STL-heavy application you first have to identify where the problems are. Memory allocators (like the one in libc) actually handle the problem of lots of small allocations, which is typical for std::string (for instance, in my server application there are lots of STL strings, but as I can see from running info heap they are not causing any problems). My impression is that you need to avoid frequent large allocations. Unfortunately, there are situations when you can't avoid them and have to change your code. As I said, in my case I improved the situation when I switched to std::deque. Once you have identified your particular memory fragmentation, it may be possible to talk about it more precisely.
What is memory fragmentation?
Memory fragmentation is when most of your memory is allocated in a large number of non-contiguous blocks, or chunks, leaving a good percentage of your total memory unallocated, but unusable for most typical scenarios. This results in out-of-memory exceptions, or allocation errors (i.e. malloc returns null).
The easiest way to think about this is to imagine you have a big empty wall that you need to put pictures of varying sizes on. Each picture takes up a certain size and you obviously can't split it into smaller pieces to make it fit. You need an empty spot on the wall the size of the picture, or else you can't put it up. Now, if you start hanging pictures on the wall and you're not careful about how you arrange them, you will soon end up with a wall that's partially covered with pictures, and even though you may have empty spots, most new pictures won't fit because they're larger than the available spots. You can still hang really small pictures, but most of them won't fit. So you'll have to rearrange (compact) the ones already on the wall to make room for more.
Now, imagine the wall is your (heap) memory and the pictures are objects. That's memory fragmentation.
How can I tell if memory fragmentation is a problem for my application? What kind of program is most likely to be affected?
A telltale sign that you may be dealing with memory fragmentation is if you get many allocation errors, especially when the percentage of used memory is high, but you haven't yet used up all of the memory, so technically you should have plenty of room for the objects you are trying to allocate.
When memory is heavily fragmented, memory allocations may also take longer, because the memory allocator has to do more work to find a suitable space for the new object. If in turn you have many memory allocations (which you probably do, since you ended up with memory fragmentation), the allocation time may even cause noticeable delays.
What are common ways to deal with memory fragmentation?
Use good algorithms for allocating memory. Instead of allocating memory for lots of small objects, pre-allocate memory for a contiguous array of those smaller objects. Sometimes being a little wasteful when allocating memory can improve performance and may save you the trouble of having to deal with memory fragmentation.
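A minimal sketch of that idea (the Particle type and the function names are made up for illustration): the first version does one heap allocation per object, while the second asks for a single contiguous block up front.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical small object used only for this illustration.
struct Particle {
    float x = 0, y = 0, z = 0;
};

// Fragmentation-prone: one separate heap allocation per object, scattered
// across the heap and freed at unpredictable times.
std::vector<std::unique_ptr<Particle>> make_scattered(std::size_t n) {
    std::vector<std::unique_ptr<Particle>> particles;
    particles.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
        particles.push_back(std::make_unique<Particle>());
    return particles;
}

// Friendlier: one contiguous allocation holding all n objects. Slightly
// "wasteful" if you reserve more than you end up needing, but the
// allocator only ever sees a single large request.
std::vector<Particle> make_contiguous(std::size_t n) {
    return std::vector<Particle>(n);
}

int main() {
    auto scattered = make_scattered(10000);
    auto contiguous = make_contiguous(10000);
    return scattered.size() == contiguous.size() ? 0 : 1;
}
```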
What is memory fragmentation?
When your app uses dynamic memory, it allocates and frees chunks of memory. In the beginning, the whole memory space of your app is one contiguous block of free memory. However, when you allocate and free blocks of different sizes, the memory starts to get fragmented, i.e. instead of a big contiguous free block and a number of contiguous allocated blocks, there will be allocated and free blocks mixed up. Since the free blocks have limited size, it is difficult to reuse them. E.g. you may have 1000 bytes of free memory, but can't allocate memory for a 100-byte block, because all the free blocks are at most 50 bytes long.
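A small sketch of that scenario, with made-up sizes: allocate a run of small blocks, free every other one, and then request something bigger than any single remaining hole, even though the total amount of free space would be enough.

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

int main() {
    // Allocate 20 small blocks of 50 bytes each.
    std::vector<void*> blocks;
    for (int i = 0; i < 20; ++i)
        blocks.push_back(std::malloc(50));

    // Free every other block: roughly 10 * 50 = 500 bytes are now free,
    // but only as 50-byte holes separated by still-allocated blocks.
    for (std::size_t i = 0; i < blocks.size(); i += 2) {
        std::free(blocks[i]);
        blocks[i] = nullptr;
    }

    // A 100-byte request cannot be satisfied from any single 50-byte hole,
    // so the allocator has to carve it out of fresh memory instead of
    // reusing the freed space. (A real malloc may still find room
    // elsewhere; this only illustrates the principle.)
    void* big = std::malloc(100);
    std::cout << "100-byte block at " << big << '\n';

    // Clean up the remaining allocations (free(nullptr) is a no-op).
    std::free(big);
    for (void* p : blocks)
        std::free(p);
}
```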
Another source of fragmentation, which is unavoidable but less problematic, is that on most architectures memory addresses must be aligned to 2-, 4- or 8-byte boundaries (i.e. the addresses must be multiples of 2, 4, 8, etc.). This means that even if you have e.g. a struct containing 3 char fields, your struct may have a size of 12 instead of 3, because each field is aligned to a 4-byte boundary.
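To see the effect on your own platform, you can print sizeof and alignof for a couple of layouts. The exact numbers are implementation-defined; on a typical compiler a struct of only chars gets little or no padding, while mixing a char with a wider field does:

```cpp
#include <iostream>

// Only char fields: char has alignment 1, so there is usually no padding.
struct ThreeChars {
    char a, b, c;
};

// Mixing a char with an int: the int must sit on a 4-byte boundary on
// most platforms, so the compiler inserts padding after 'a' and the
// struct typically grows from 5 "useful" bytes to 8.
struct Mixed {
    char a;
    int b;
};

int main() {
    std::cout << "sizeof(ThreeChars) = " << sizeof(ThreeChars)
              << ", alignof = " << alignof(ThreeChars) << '\n';
    std::cout << "sizeof(Mixed)      = " << sizeof(Mixed)
              << ", alignof = " << alignof(Mixed) << '\n';
}
```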
How can I tell if memory fragmentation is a problem for my application? What kind of program is most likely to be affected?
The most obvious answer is that you get an out-of-memory exception.
Apparently there is no good portable way to detect memory fragmentation in C++ applications. See this answer for more details.
What are common ways to deal with memory fragmentation?
It is difficult in C++, since you use direct memory addresses in pointers and have no control over who references a specific memory address. So rearranging the allocated memory blocks (the way the Java garbage collector does) is not an option.
A custom allocator can help by managing the allocation of small objects in a bigger chunk of memory and reusing the free slots within that chunk.
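A minimal sketch of such a custom allocator, assuming a fixed-size pool with an intrusive free list (the FixedPool and Message names are made up for illustration; this is not production code: no thread safety, no growth, no special alignment handling):

```cpp
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// One big chunk is allocated up front, carved into equally sized slots,
// and freed slots are chained into an intrusive free list for reuse.
template <typename T>
class FixedPool {
public:
    explicit FixedPool(std::size_t capacity) : storage_(capacity) {
        // Initially every slot is free: chain them all together.
        for (std::size_t i = 0; i + 1 < capacity; ++i)
            storage_[i].next = &storage_[i + 1];
        if (capacity > 0) {
            storage_[capacity - 1].next = nullptr;
            free_list_ = &storage_[0];
        }
    }

    T* allocate() {
        if (!free_list_)
            return nullptr;              // pool exhausted
        Slot* slot = free_list_;
        free_list_ = slot->next;         // pop a slot off the free list
        return new (&slot->object) T();  // construct the object in place
    }

    void deallocate(T* p) {
        p->~T();
        Slot* slot = reinterpret_cast<Slot*>(p);
        slot->next = free_list_;         // push the slot back for reuse
        free_list_ = slot;
    }

private:
    union Slot {
        Slot() {}
        ~Slot() {}
        T object;
        Slot* next;
    };

    std::vector<Slot> storage_;   // the one big contiguous chunk
    Slot* free_list_ = nullptr;
};

struct Message {  // hypothetical small object
    int id = 0;
};

int main() {
    FixedPool<Message> pool(1024);

    Message* a = pool.allocate();
    Message* b = pool.allocate();
    std::cout << "a at " << static_cast<void*>(a) << '\n';

    pool.deallocate(a);            // a's slot goes back on the free list
    Message* c = pool.allocate();  // reuses the slot that was just freed
    std::cout << "c at " << static_cast<void*>(c) << '\n';

    pool.deallocate(b);
    pool.deallocate(c);
}
```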