What are some really good reasons to ditch std::allocator in favor of a custom solution? Have you run across any situations where it was absolutely necessary for correctness, performance, scalability, etc.? Any really clever examples?

Custom allocators have always been a feature of the standard library that I've never had much need for. I was just wondering if anyone here can provide some compelling examples to justify their existence.


Current answer

I'm working on a MySQL storage engine whose code is written in C++. We use a custom allocator to work with the MySQL memory system rather than competing with MySQL for memory. It lets us be sure that the memory we use is memory the user configured MySQL to use, and not "extra".
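
The engine code itself isn't shown here, but the idea can be sketched with a minimal C++11-style allocator whose allocate/deallocate delegate to the host's memory hooks. engine_malloc/engine_free below are hypothetical stand-ins (in MySQL's case they would wrap the server's own allocation functions); here they fall back to malloc/free only so that the sketch compiles on its own:

#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical stand-ins for the host's memory hooks; a real storage engine
// would call into the server's memory system here instead of malloc/free.
inline void *engine_malloc(std::size_t bytes) { return std::malloc(bytes); }
inline void engine_free(void *p) { std::free(p); }

template <typename T>
struct engine_allocator
{
    typedef T value_type;

    engine_allocator() {}
    template <typename U> engine_allocator(const engine_allocator<U> &) {}

    T *allocate(std::size_t n)
    {
        // Every byte is requested from the host, so it is accounted against
        // the memory the user configured rather than taken as "extra".
        void *p = engine_malloc(n * sizeof(T));
        if (!p) throw std::bad_alloc();
        return static_cast<T *>(p);
    }

    void deallocate(T *p, std::size_t) { engine_free(p); }
};

template <typename T, typename U>
bool operator==(const engine_allocator<T> &, const engine_allocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const engine_allocator<T> &, const engine_allocator<U> &) { return false; }

A container declared as, e.g., std::vector<int, engine_allocator<int> > then routes all of its memory traffic through the host.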

Other answers

I am working on a mmap-allocator that allows vectors to use memory from a memory-mapped file. The goal is to have vectors whose storage sits directly in the virtual memory mapped by mmap. Our problem is to improve the reading of really big files (>10GB) into memory with no copy overhead, therefore I need this custom allocator.

So far I have the skeleton of a custom allocator (derived from std::allocator), which I think is a good starting point for writing your own allocators. Feel free to use this piece of code in whatever way you want:

#include <memory>
#include <stdio.h>

namespace mmap_allocator_namespace
{
        // See StackOverflow replies to this answer for important commentary about inheriting from std::allocator before replicating this code.
        template <typename T>
        class mmap_allocator: public std::allocator<T>
        {
public:
                typedef size_t size_type;
                typedef T* pointer;
                typedef const T* const_pointer;

                template<typename _Tp1>
                struct rebind
                {
                        typedef mmap_allocator<_Tp1> other;
                };

                pointer allocate(size_type n, const void *hint=0)
                {
                        // For now this only logs the request and forwards to
                        // std::allocator; the real mmap-backed allocation goes here.
                        fprintf(stderr, "Alloc %zu bytes.\n", n*sizeof(T));
                        return std::allocator<T>::allocate(n, hint);
                }

                void deallocate(pointer p, size_type n)
                {
                        fprintf(stderr, "Dealloc %zu bytes (%p).\n", n*sizeof(T), (void *)p);
                        return std::allocator<T>::deallocate(p, n);
                }

                mmap_allocator() throw(): std::allocator<T>() { fprintf(stderr, "Hello allocator!\n"); }
                mmap_allocator(const mmap_allocator &a) throw(): std::allocator<T>(a) { }
                template <class U>                    
                mmap_allocator(const mmap_allocator<U> &a) throw(): std::allocator<T>(a) { }
                ~mmap_allocator() throw() { }
        };
}

To use it, declare an STL container as follows:

using namespace std;
using namespace mmap_allocator_namespace;

vector<int, mmap_allocator<int> > int_vec(1024, 0, mmap_allocator<int>());

You can use it, for example, to log whenever memory is allocated. What is necessary is the rebind struct, otherwise the vector container uses the superclass's allocate/deallocate methods.
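
To go from the logging skeleton towards the actual goal, the allocate/deallocate pair would eventually issue mmap/munmap calls itself. A rough sketch of that pair on Linux, using anonymous mappings for brevity instead of the file-backed mappings the project actually targets (MAP_ANONYMOUS is the Linux spelling; some systems call it MAP_ANON):

#include <sys/mman.h>
#include <cstddef>
#include <new>

// Sketch only: private anonymous mappings, no backing file, no pooling of
// mappings and no rounding beyond what mmap does internally.
template <typename T>
T *mmap_allocate(std::size_t n)
{
    void *p = mmap(0, n * sizeof(T), PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        throw std::bad_alloc();
    return static_cast<T *>(p);
}

template <typename T>
void mmap_deallocate(T *p, std::size_t n)
{
    munmap(p, n * sizeof(T));
}

In the real allocator these bodies would replace allocate and deallocate above, with the anonymous mapping swapped for a mapping of the target file.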

Update: The memory-mapped allocator is now available at https://github.com/johannesthoma/mmap_allocator and is LGPL. Feel free to use it for your projects.

I'm using a custom allocator to count the number of allocations/deallocations in one part of my program and to measure how long they take. There are other ways this could be achieved, but this method is very convenient for me. It is especially useful that I can use the custom allocator for only a subset of my containers.
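
That answer does not include code, but a counting/timing allocator can be sketched roughly as follows; the global stats struct and the use of std::chrono are my own choices for illustration, not the author's:

#include <chrono>
#include <cstddef>
#include <cstdlib>
#include <new>

// Rough sketch of a counting/timing allocator; counters are plain globals
// here, but per-container or thread-local state would work just as well.
struct alloc_stats
{
    std::size_t allocations;
    std::size_t deallocations;
    std::chrono::nanoseconds time_spent;
};

alloc_stats g_stats = {};

template <typename T>
struct counting_allocator
{
    typedef T value_type;

    counting_allocator() {}
    template <typename U> counting_allocator(const counting_allocator<U> &) {}

    T *allocate(std::size_t n)
    {
        std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
        void *p = std::malloc(n * sizeof(T));
        g_stats.time_spent += std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::steady_clock::now() - start);
        ++g_stats.allocations;
        if (!p) throw std::bad_alloc();
        return static_cast<T *>(p);
    }

    void deallocate(T *p, std::size_t)
    {
        std::free(p);
        ++g_stats.deallocations;
    }
};

template <typename T, typename U>
bool operator==(const counting_allocator<T> &, const counting_allocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const counting_allocator<T> &, const counting_allocator<U> &) { return false; }

Because the allocator is a template parameter of the container, it can be applied to just the handful of containers being profiled and nothing else.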

When working with GPUs or other co-processors it is sometimes beneficial to allocate data structures in main memory in a special way. This special way of allocating memory can be implemented conveniently in a custom allocator.

The reason why custom allocation through the accelerator runtime can be beneficial when using accelerators is the following:

1. through custom allocation the accelerator runtime or driver is notified of the memory block
2. in addition the operating system can make sure that the allocated block of memory is page-locked (some call this pinned memory), that is, the virtual memory subsystem of the operating system may not move or remove the page within or from memory
3. if 1. and 2. hold and a data transfer between a page-locked memory block and an accelerator is requested, the runtime can directly access the data in main memory since it knows where it is and it can be sure the operating system did not move/remove it
4. this saves one memory copy that would occur with memory that was allocated in a non-page-locked way: the data has to be copied in main memory to a page-locked staging area from which the accelerator can initiate the data transfer (through DMA)
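
As an illustration only (the answer above is runtime-agnostic), this is roughly what such an allocator could look like with CUDA as the accelerator runtime: cudaMallocHost/cudaFreeHost allocate and release page-locked host memory that the driver knows about, so transfers can go straight over DMA.

#include <cuda_runtime.h>
#include <cstddef>
#include <new>

// Sketch of a pinned (page-locked) host-memory allocator on top of the
// CUDA runtime API.
template <typename T>
struct pinned_allocator
{
    typedef T value_type;

    pinned_allocator() {}
    template <typename U> pinned_allocator(const pinned_allocator<U> &) {}

    T *allocate(std::size_t n)
    {
        void *p = 0;
        if (cudaMallocHost(&p, n * sizeof(T)) != cudaSuccess)
            throw std::bad_alloc();
        return static_cast<T *>(p);
    }

    void deallocate(T *p, std::size_t) { cudaFreeHost(p); }
};

template <typename T, typename U>
bool operator==(const pinned_allocator<T> &, const pinned_allocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const pinned_allocator<T> &, const pinned_allocator<U> &) { return false; }

A std::vector<float, pinned_allocator<float> > can then be handed to the runtime for copies without an intermediate staging buffer.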

One example of a time I have used these was working with very resource-constrained embedded systems. Let's say you have 2K of RAM free and your program has to use some of that memory. You need to store, say, 4-5 sequences somewhere that's not on the stack, and additionally you need very precise control over where these things get stored; this is a situation where you might want to write your own allocator. The default implementations can fragment the memory, which might be unacceptable if you don't have enough memory and cannot restart your program.

One project I was working on was using AVR-GCC on some low-powered chips. We had to store 8 sequences of variable length but with a known maximum. The standard library implementation of memory management is a thin wrapper around malloc/free which keeps track of where to place items by prepending every allocated block of memory with a pointer to just past the end of that allocated piece of memory. When allocating a new piece of memory the standard allocator has to walk over each of the pieces of memory to find the next block that is available where the requested size of memory will fit. On a desktop platform this would be very fast for this small number of items, but you have to keep in mind that some of these microcontrollers are very slow and primitive in comparison. Additionally, the memory fragmentation issue was a massive problem that meant we really had no choice but to take a different approach.

So what we did was to implement our own memory pool. Each block of memory was big enough to fit the largest sequence we would need in it. The pool allocated fixed-size blocks of memory ahead of time and marked which blocks of memory were currently in use. We did this by keeping one 8-bit integer where each bit represented whether a certain block was used. We traded off memory usage here to make the whole process faster, which in our case was justified as we were pushing this microcontroller chip close to its maximum processing capacity.
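
A minimal sketch of that kind of pool (the block count matches the 8-bit mask described above; the block size is made up for illustration):

#include <stdint.h>
#include <stddef.h>

/* Fixed-size block pool: one bit of used_mask per block, no walking of a
   free list and no fragmentation, at the cost of reserving the worst-case
   block size up front. */
enum { BLOCK_COUNT = 8, BLOCK_SIZE = 32 };

static uint8_t pool_storage[BLOCK_COUNT][BLOCK_SIZE];
static uint8_t used_mask = 0;           /* bit i set => block i is in use */

void *pool_alloc(void)
{
    for (uint8_t i = 0; i < BLOCK_COUNT; ++i) {
        if (!(used_mask & (1u << i))) {
            used_mask |= (uint8_t)(1u << i);
            return pool_storage[i];
        }
    }
    return 0;                           /* pool exhausted */
}

void pool_free(void *p)
{
    uint8_t i = (uint8_t)(((uint8_t *)p - pool_storage[0]) / BLOCK_SIZE);
    used_mask &= (uint8_t)~(1u << i);
}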

In the context of embedded systems there are other situations as well where I could see writing your own custom allocator, for example if the memory for the sequences is not in main RAM, which may frequently be the case on these platforms.