X-Git-Url: https://gerrit.fd.io/r/gitweb?a=blobdiff_plain;f=doc%2Fguides%2Fprog_guide%2Fmempool_lib.rst;h=ffdc109610b344c487d4763d695465effd6208e3;hb=50dfac14fc0f3e4a463cfc58dd84e89f4be09350;hp=5fae79abb53c9e7815c2e393054a749802c86ae7;hpb=5129044dce1f85ce4950f31bcf90f3886466f06a;p=deb_dpdk.git

diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 5fae79ab..ffdc1096 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -34,13 +34,12 @@ Mempool Library
 ===============

 A memory pool is an allocator of a fixed-sized object.
-In the DPDK, it is identified by name and uses a ring to store free objects.
+In the DPDK, it is identified by name and uses a mempool handler to store free objects.
+The default mempool handler is ring-based.
 It provides some other optional services such as a per-core object cache and
 an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.

-This library is used by the
-:ref:`Mbuf Library <Mbuf_Library>` and the
-:ref:`Environment Abstraction Layer <Environment_Abstraction_Layer>` (for logging history).
+This library is used by the :ref:`Mbuf Library <Mbuf_Library>`.

 Cookies
 -------
@@ -116,7 +115,7 @@ While this may mean a number of buffers may sit idle on some core's cache,
 the speed at which a core can access its own cache for a specific memory pool without locks provides performance gains.

 The cache is composed of a small, per-core table of pointers and its length (used as a stack).
-This cache can be enabled or disabled at creation of the pool.
+This internal cache can be enabled or disabled at creation of the pool.

 The maximum size of the cache is static and is defined at compilation time (CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE).

@@ -128,6 +127,39 @@ The maximum size of the cache is static and is defined at compilation time (CONF

    A mempool in Memory with its Associated Ring

+As an alternative to the internal default per-lcore local cache, an application can create and manage external caches through the ``rte_mempool_cache_create()``, ``rte_mempool_cache_free()`` and ``rte_mempool_cache_flush()`` calls.
+These user-owned caches can be explicitly passed to ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``.
+The ``rte_mempool_default_cache()`` call returns the default internal cache, if any.
+In contrast to the default caches, user-owned caches can be used by non-EAL threads too.
+
+Mempool Handlers
+------------------------
+
+Mempool handlers allow external memory subsystems, such as external hardware
+memory management systems and software-based memory allocators, to be used with DPDK.
+
+There are two aspects to a mempool handler:
+
+* Adding the code for your new mempool operations (ops). This is achieved by
+  adding a new mempool ops structure and registering it with the ``MEMPOOL_REGISTER_OPS`` macro.
+
+* Using the new API: calling ``rte_mempool_create_empty()`` to create a new
+  mempool and ``rte_mempool_set_ops_byname()`` to specify which ops it should
+  use.
+
+Several different mempool handlers may be used in the same application. A new
+mempool can be created by using the ``rte_mempool_create_empty()`` function,
+then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
+relevant mempool handler callback (ops) structure.
+
+Legacy applications may continue to use the old ``rte_mempool_create()`` API
+call, which uses a ring-based mempool handler by default. These applications
+will need to be modified if they are to use a different mempool handler.
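As an illustration of the creation flow described in the added paragraphs above, a minimal sketch in C might look like the following. The pool name, object count, object size, cache size and the ``ring_mp_mc`` handler name are arbitrary example choices, and error handling is reduced to early returns:

.. code-block:: c

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define EXAMPLE_NUM_OBJS 8192   /* number of objects in the pool */
    #define EXAMPLE_OBJ_SIZE 2048   /* size of each object in bytes */

    static struct rte_mempool *
    example_create_pool_with_handler(void)
    {
        struct rte_mempool *mp;

        /* Create an empty mempool; no memory is allocated for objects yet. */
        mp = rte_mempool_create_empty("example_pool",
                                      EXAMPLE_NUM_OBJS, EXAMPLE_OBJ_SIZE,
                                      256 /* per-lcore cache size */, 0,
                                      rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;

        /* Point the mempool at a handler (ops) by name before populating it. */
        if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        /* Allocate memory and add the objects through the chosen handler. */
        if (rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        return mp;
    }

A handler registered with ``MEMPOOL_REGISTER_OPS`` is selected by the same name string that is later passed to ``rte_mempool_set_ops_byname()``.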
+
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
+(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
+an alternative mempool handler.
+
 Use Cases
 ---------
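Relating back to the ``RTE_MBUF_DEFAULT_MEMPOOL_OPS`` paragraph above, the selection happens at build time. A minimal sketch, assuming the make-based build where the option appears as ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` in ``config/common_base`` and assuming an alternative handler such as ``stack`` is compiled in, might look like this:

.. code-block:: c

    /*
     * Build-time selection, e.g. in config/common_base:
     *
     *     CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="stack"
     *
     * Pools created through rte_pktmbuf_pool_create() then use the handler
     * named by that option instead of the ring-based default.
     */
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    example_create_mbuf_pool(void)
    {
        return rte_pktmbuf_pool_create("example_mbuf_pool",
                                       8192,   /* number of mbufs */
                                       256,    /* per-lcore cache size */
                                       0,      /* application private area size */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
    }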