Monday, 14 April 2014

Memory Pool

       Memory pools, also called fixed-size-block allocation, use a pool of preallocated blocks for memory management. This allows dynamic memory allocation comparable to malloc or C++'s operator new. Because those general-purpose allocators suffer from fragmentation due to variable block sizes and their run time is hard to bound, they are usually avoided in real-time systems. A more efficient solution is to preallocate a number of memory blocks of the same size, called the memory pool. The application can then allocate, access, and free blocks, represented by handles, at run time.

      Many real-time operating systems use memory pools, such as the Transaction Processing Facility.

      In short, a memory pool is a block of memory obtained from the system once, from which the application hands out fixed-size units instead of calling malloc/free or new/delete each time. The advantage of the technique is that existing memory is reused, which reduces the number of system calls.
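      To make this concrete, here is a minimal sketch of such a pool in C++. The class name FixedPool and its interface are assumptions for illustration: the pool carves one preallocated buffer into equal-sized blocks and threads a free list through the unused blocks, so allocate/deallocate never call into the system allocator after construction.

#include <cstddef>
#include <vector>

// Minimal fixed-size block pool (illustrative sketch, not a production allocator).
// One buffer is preallocated up front; free blocks form a singly linked list
// threaded through the blocks themselves.
class FixedPool {
public:
    FixedPool(std::size_t block_size, std::size_t block_count)
        : block_size_(block_size < sizeof(void*) ? sizeof(void*) : block_size),
          storage_(block_size_ * block_count),
          free_head_(nullptr) {
        // Thread every block onto the free list. In a real implementation,
        // block_size_ should also be rounded up to a suitable alignment.
        for (std::size_t i = 0; i < block_count; ++i) {
            void* block = storage_.data() + i * block_size_;
            *static_cast<void**>(block) = free_head_;
            free_head_ = block;
        }
    }

    // Pop a block from the free list; returns nullptr if the pool is exhausted.
    void* allocate() {
        if (!free_head_) return nullptr;
        void* block = free_head_;
        free_head_ = *static_cast<void**>(block);
        return block;
    }

    // Push the block back onto the free list; no call into the system allocator.
    void deallocate(void* block) {
        *static_cast<void**>(block) = free_head_;
        free_head_ = block;
    }

private:
    std::size_t block_size_;
    std::vector<unsigned char> storage_;  // the preallocated pool
    void* free_head_;                     // head of the free list
};

// Usage sketch:
//   FixedPool pool(64, 128);     // 128 blocks of 64 bytes, preallocated once
//   void* p = pool.allocate();   // O(1), no call to malloc/new
//   pool.deallocate(p);          // O(1), block returns to the free list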


       A simple memory pool module can allocate, for example, three pools at compile time with block sizes optimized for the application deploying the module. The application can allocate, access and free memory through the following interface:
  • Allocate memory from the pools. The function determines the pool in which the requested block fits. If all blocks of that pool are already reserved, the function tries to find a free block in the next bigger pool(s). An allocated memory block is represented with a handle.
  • Get an access pointer to the allocated memory.
  • Free the formerly allocated memory block.

       The handle can, for example, be implemented as an unsigned int. The module can interpret the handle internally by dividing it into a pool index, a memory block index, and a version. The pool and block indices allow fast access to the corresponding block through the handle, while the version, which is incremented at each new allocation, allows detection of handles whose memory block has already been freed (caused by handles being retained too long).
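
       A possible encoding of such a handle is sketched below, assuming a 32-bit handle split into a 4-bit pool index, a 16-bit block index, and a 12-bit version. The field widths and names are illustrative assumptions, not prescribed by the text.

#include <cstdint>

// Hypothetical 32-bit handle layout: | pool (4) | block (16) | version (12) |
struct Handle {
    std::uint32_t value;

    static Handle make(std::uint32_t pool, std::uint32_t block, std::uint32_t version) {
        return Handle{ ((pool & 0xFu) << 28) | ((block & 0xFFFFu) << 12) | (version & 0xFFFu) };
    }

    std::uint32_t pool()    const { return (value >> 28) & 0xFu; }
    std::uint32_t block()   const { return (value >> 12) & 0xFFFFu; }
    std::uint32_t version() const { return value & 0xFFFu; }
};

// The module keeps a version counter per block and increments it on every new
// allocation; a handle whose version no longer matches refers to a block that
// has since been freed (or reused) and can be rejected before access.
bool handle_is_valid(Handle h, std::uint32_t current_block_version) {
    return h.version() == current_block_version;
}

       With this scheme, looking up the memory behind a handle is two array indexings (pool index, then block index), and a dangling handle is caught by the version comparison before any access.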

