kernel: Deprecate k_mem_pool APIs
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.

Fixes #24358

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
parent 27b1394331
commit 6965cf526d

31 changed files with 116 additions and 410 deletions
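For application code still calling the deprecated APIs, the usual replacement is the k_heap allocator that now backs them. A minimal migration sketch; the heap name and sizes are hypothetical and not taken from this commit:

    #include <kernel.h>

    /* Replaces a former K_MEM_POOL_DEFINE() pool with a 4 kB heap. */
    K_HEAP_DEFINE(my_heap, 4096);

    void demo(void)
    {
        /* k_heap_alloc() takes a timeout, like k_mem_pool_alloc() did. */
        void *mem = k_heap_alloc(&my_heap, 200, K_MSEC(100));

        if (mem != NULL) {
            /* ... use the 200-byte buffer ... */
            k_heap_free(&my_heap, mem);
        }
    }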
@@ -92,7 +92,6 @@ These pages cover memory allocation and management services.

   memory/heap.rst
   memory/slabs.rst
-  memory/pools.rst

Timing
******
@@ -1,217 +0,0 @@
.. _memory_pools_v2:

Memory Pools
############

.. note::

   The :c:type:`k_mem_pool` data structure defined here has been deprecated
   in current Zephyr code. It still exists for applications which
   require the specific memory allocation and alignment patterns
   detailed below, but the default heap implementation (including the
   default backend to the k_mem_pool APIs) is now a :c:struct:`k_heap`
   allocator, which is a better choice for general purpose code.

A :dfn:`memory pool` is a kernel object that allows memory blocks
to be dynamically allocated from a designated memory region.
The memory blocks in a memory pool can be of any size,
thereby reducing the amount of wasted memory when an application
needs to allocate storage for data structures of different sizes.
The memory pool uses a "buddy memory allocation" algorithm
to efficiently partition larger blocks into smaller ones,
allowing blocks of different sizes to be allocated and released efficiently
while limiting memory fragmentation concerns.

.. contents::
    :local:
    :depth: 2
Concepts
********

Any number of memory pools can be defined (limited only by available RAM). Each
memory pool is referenced by its memory address.

A memory pool has the following key properties:

* A **minimum block size**, measured in bytes.
  It must be at least 4X bytes long, where X is greater than 0.

* A **maximum block size**, measured in bytes.
  This should be a power of 4 times larger than the minimum block size.
  That is, "maximum block size" must equal "minimum block size" times 4^Y,
  where Y is greater than or equal to zero.

* The **number of maximum-size blocks** initially available.
  This must be greater than zero.

* A **buffer** that provides the memory for the memory pool's blocks.
  This must be at least "maximum block size" times
  "number of maximum-size blocks" bytes long.

The memory pool's buffer must be aligned to an N-byte boundary, where
N is a power of 2 larger than 2 (i.e. 4, 8, 16, ...). To ensure that
all memory blocks in the buffer are similarly aligned to this boundary,
the minimum block size must also be a multiple of N.
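These sizing rules can be verified at build time with Zephyr's
:c:macro:`BUILD_ASSERT`. The following fragment is an illustrative sketch
only, using hypothetical pool parameters; it is not part of the original page:

.. code-block:: c

    #define MY_MIN_SZ  64                       /* multiple of 4 and of N = 8 */
    #define MY_MAX_SZ  4096                     /* 64 * 4^3, i.e. Y = 3 */
    #define MY_N_MAX   3                        /* maximum-size blocks */
    #define MY_BUF_SZ  (MY_MAX_SZ * MY_N_MAX)   /* minimum buffer size */

    BUILD_ASSERT((MY_MIN_SZ % 4) == 0, "minimum block size: multiple of 4");
    BUILD_ASSERT(MY_MAX_SZ == MY_MIN_SZ * 4 * 4 * 4, "maxsz must be minsz * 4^Y");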
A thread that needs to use a memory block simply allocates it from a memory
pool. Following a successful allocation, the :c:data:`data` field
of the block descriptor supplied by the thread indicates the starting address
of the memory block. When the thread is finished with a memory block,
it must release the block back to the memory pool so the block can be reused.

If a block of the desired size is unavailable, a thread can optionally wait
for one to become available.
Any number of threads may wait on a memory pool simultaneously;
when a suitable memory block becomes available, it is given to
the highest-priority thread that has waited the longest.

Unlike a heap, more than one memory pool can be defined, if needed. For
example, different applications can utilize different memory pools; this
can help prevent one application from hijacking resources to allocate all
of the available blocks.
Internal Operation
==================

A memory pool's buffer is an array of maximum-size blocks,
with no wasted space between the blocks.
Each of these "level 0" blocks is a *quad-block* that can be
partitioned into four smaller "level 1" blocks of equal size, if needed.
Likewise, each level 1 block is itself a quad-block that can be partitioned
into four smaller "level 2" blocks in a similar way, and so on.
Thus, memory pool blocks can be recursively partitioned into quarters
until blocks of the minimum size are obtained,
at which point no further partitioning can occur.

A memory pool keeps track of how its buffer space has been partitioned
using an array of *block set* data structures. There is one block set
for each partitioning level supported by the pool, or (to put it another way)
for each block size. A block set keeps track of all free blocks of its
associated size using an array of *quad-block status* data structures.

When an application issues a request for a memory block,
the memory pool first determines the size of the smallest block
that will satisfy the request, and examines the corresponding block set.
If the block set contains a free block, the block is marked as used
and the allocation process is complete.
If the block set does not contain a free block,
the memory pool attempts to create one automatically by splitting a free block
of a larger size or by merging free blocks of smaller sizes;
if a suitable block can't be created, the allocation request fails.

The memory pool's merging algorithm cannot combine adjacent free
blocks of different sizes, nor can it merge adjacent free blocks of
the same size if they belong to different parent quad-blocks. As a
consequence, memory fragmentation issues can still be encountered when
using a memory pool.

When an application releases a previously allocated memory block it is
combined synchronously with its three "partner" blocks if possible,
and recursively so up through the levels. This is done in constant
time, and quickly, so no manual "defragmentation" management is
needed.
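The level-selection rule described above can be paraphrased in code. This
helper is only a sketch of the logic, not the kernel's actual implementation:

.. code-block:: c

    /* Smallest block size that satisfies a request: start at the maximum
     * size and keep quartering while a smaller level would still fit.
     * Returns 0 if the request exceeds the maximum block size.
     */
    static size_t smallest_fitting_block(size_t min_sz, size_t max_sz,
                                         size_t req)
    {
        size_t sz = max_sz;

        while (sz / 4 >= min_sz && sz / 4 >= req) {
            sz /= 4;
        }
        return (req <= sz) ? sz : 0;
    }

For the 64..4096 byte pool defined in the next section, a 200-byte request
maps to a 256-byte block, matching the allocation example below.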
Implementation
**************

Defining a Memory Pool
======================

A memory pool is defined using a variable of type :c:struct:`k_mem_pool`.
However, since a memory pool also requires a number of variable-size data
structures to represent its block sets and the status of its quad-blocks,
the kernel does not support the runtime definition of a memory pool.
A memory pool can only be defined and initialized at compile time
by calling :c:macro:`K_MEM_POOL_DEFINE`.

The following code defines and initializes a memory pool that has 3 blocks
of 4096 bytes each, which can be partitioned into blocks as small as 64 bytes
and is aligned to a 4-byte boundary.
(That is, the memory pool supports block sizes of 4096, 1024, 256,
and 64 bytes.)
Observe that the macro defines all of the memory pool data structures,
as well as its buffer.

.. code-block:: c

    K_MEM_POOL_DEFINE(my_pool, 64, 4096, 3, 4);
Allocating a Memory Block
=========================

A memory block is allocated by calling :c:func:`k_mem_pool_alloc`.

The following code builds on the example above, and waits up to 100 milliseconds
for a 200 byte memory block to become available, then fills it with zeroes.
A warning is issued if a suitable block is not obtained.

Note that the application will actually receive a 256 byte memory block,
since that is the closest matching size supported by the memory pool.

.. code-block:: c

    struct k_mem_block block;

    if (k_mem_pool_alloc(&my_pool, &block, 200, K_MSEC(100)) == 0) {
        memset(block.data, 0, 200);
        ...
    } else {
        printf("Memory allocation time-out");
    }

Memory blocks may also be allocated with :c:func:`malloc`-like semantics
using :c:func:`k_mem_pool_malloc`. Such allocations must be freed with
:c:func:`k_free`.
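A short sketch of the malloc-style variant, reusing ``my_pool`` from the
example above (illustrative, not from the original page):

.. code-block:: c

    void *mem = k_mem_pool_malloc(&my_pool, 200);

    if (mem != NULL) {
        memset(mem, 0, 200);
        /* ... use the buffer ... */
        k_free(mem);  /* malloc-style allocations are freed with k_free() */
    }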
Releasing a Memory Block
========================

A memory block is released by calling either :c:func:`k_mem_pool_free`
or :c:func:`k_free`, depending on how it was allocated.

The following code builds on the example above, and allocates a 75 byte
memory block, then releases it once it is no longer needed. (A 256 byte
memory block is actually used to satisfy the request.)

.. code-block:: c

    struct k_mem_block block;

    k_mem_pool_alloc(&my_pool, &block, 75, K_FOREVER);
    ... /* use memory block */
    k_mem_pool_free(&block);
Thread Resource Pools
*********************

Certain kernel APIs may need to make heap allocations on behalf of the
calling thread. For example, some initialization APIs for objects like
pipes and message queues may need to allocate a private kernel-side buffer,
or objects like queues may temporarily allocate kernel data structures
as items are placed in the queue.

Such memory allocations are drawn from memory pools that are assigned to
a thread. By default, a thread in the system has no resource pool and
any allocations made on its behalf will fail. The supervisor-mode only
:c:func:`k_thread_resource_pool_assign` will associate any implicit
kernel-side allocations to the target thread with the provided memory pool,
and any children of that thread will inherit this assignment.

If a system heap exists, threads may alternatively have their resources
drawn from it using the :c:func:`k_thread_system_pool_assign` API.
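A minimal sketch of assigning a resource pool to a thread; the pool
parameters and names are hypothetical:

.. code-block:: c

    K_MEM_POOL_DEFINE(thread_res_pool, 32, 256, 4, 4);

    void configure_worker(struct k_thread *worker)
    {
        /* Implicit kernel-side allocations made on behalf of `worker`
         * (and of any threads it later creates) will come from
         * thread_res_pool.
         */
        k_thread_resource_pool_assign(worker, &thread_res_pool);

        /* Alternatively, when a system heap is configured: */
        /* k_thread_system_pool_assign(worker); */
    }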
Suggested Uses
**************

Use a memory pool to allocate memory in variable-size blocks.

Use memory pool blocks when sending large amounts of data from one thread
to another, to avoid unnecessary copying of the data.

API Reference
*************

.. doxygengroup:: mem_pool_apis
   :project: Zephyr
@@ -80,7 +80,7 @@ static struct buf_descriptor __aligned(512) bdt[(NUM_OF_EP_MAX) * 2 * 2];

#define EP_BUF_NUMOF_BLOCKS (NUM_OF_EP_MAX / 2)

-K_MEM_POOL_DEFINE(ep_buf_pool, 16, 512, EP_BUF_NUMOF_BLOCKS, 4);
+Z_MEM_POOL_DEFINE(ep_buf_pool, 16, 512, EP_BUF_NUMOF_BLOCKS, 4);

struct usb_ep_ctrl_data {
    struct ep_status {
@@ -353,14 +353,14 @@ int usb_dc_ep_configure(const struct usb_dc_ep_cfg_data * const cfg)
    }

    if (bdt[idx_even].buf_addr) {
-        k_mem_pool_free(block);
+        z_mem_pool_free(block);
    }

    USB0->ENDPOINT[ep_idx].ENDPT = 0;
    (void)memset(&bdt[idx_even], 0, sizeof(struct buf_descriptor));
    (void)memset(&bdt[idx_odd], 0, sizeof(struct buf_descriptor));

-    if (k_mem_pool_alloc(&ep_buf_pool, block, cfg->ep_mps * 2U, K_MSEC(10)) == 0) {
+    if (z_mem_pool_alloc(&ep_buf_pool, block, cfg->ep_mps * 2U, K_MSEC(10)) == 0) {
        (void)memset(block->data, 0, cfg->ep_mps * 2U);
    } else {
        LOG_ERR("Memory allocation time-out");
@@ -178,7 +178,7 @@ struct usbd_event {
#error Invalid USBD event queue size (CONFIG_USB_NRFX_EVT_QUEUE_SIZE).
#endif

-K_MEM_POOL_DEFINE(fifo_elem_pool, FIFO_ELEM_MIN_SZ, FIFO_ELEM_MAX_SZ,
+Z_MEM_POOL_DEFINE(fifo_elem_pool, FIFO_ELEM_MIN_SZ, FIFO_ELEM_MAX_SZ,
                  CONFIG_USB_NRFX_EVT_QUEUE_SIZE, FIFO_ELEM_ALIGN);

/**
@@ -233,7 +233,7 @@ K_MEM_POOL_DEFINE(fifo_elem_pool, FIFO_ELEM_MIN_SZ, FIFO_ELEM_MAX_SZ,
/** 4 Byte Buffer alignment required by hardware */
#define EP_BUF_POOL_ALIGNMENT sizeof(unsigned int)

-K_MEM_POOL_DEFINE(ep_buf_pool, EP_BUF_POOL_BLOCK_MIN_SZ,
+Z_MEM_POOL_DEFINE(ep_buf_pool, EP_BUF_POOL_BLOCK_MIN_SZ,
                  EP_BUF_POOL_BLOCK_MAX_SZ, EP_BUF_POOL_BLOCK_COUNT,
                  EP_BUF_POOL_ALIGNMENT);

@@ -406,7 +406,7 @@ static inline void usbd_work_schedule(void)
 */
static inline void usbd_evt_free(struct usbd_event *ev)
{
-    k_mem_pool_free(&ev->block);
+    z_mem_pool_free(&ev->block);
}

/**
@@ -455,7 +455,7 @@ static inline struct usbd_event *usbd_evt_alloc(void)
    struct usbd_event *ev;
    struct k_mem_block block;

-    ret = k_mem_pool_alloc(&fifo_elem_pool, &block,
+    ret = z_mem_pool_alloc(&fifo_elem_pool, &block,
                           sizeof(struct usbd_event),
                           K_NO_WAIT);

@@ -470,7 +470,7 @@ static inline struct usbd_event *usbd_evt_alloc(void)
         */
        usbd_evt_flush();

-        ret = k_mem_pool_alloc(&fifo_elem_pool, &block,
+        ret = z_mem_pool_alloc(&fifo_elem_pool, &block,
                               sizeof(struct usbd_event),
                               K_NO_WAIT);
        if (ret < 0) {
@@ -635,7 +635,6 @@ static int eps_ctx_init(void)
    for (i = 0U; i < CFG_EPIN_CNT; i++) {
        ep_ctx = in_endpoint_ctx(i);
        __ASSERT_NO_MSG(ep_ctx);

        ep_ctx_reset(ep_ctx);
    }
@@ -644,7 +643,7 @@ static int eps_ctx_init(void)
        __ASSERT_NO_MSG(ep_ctx);

        if (!ep_ctx->buf.block.data) {
-            err = k_mem_pool_alloc(&ep_buf_pool, &ep_ctx->buf.block,
+            err = z_mem_pool_alloc(&ep_buf_pool, &ep_ctx->buf.block,
                                   EP_BUF_MAX_SZ, K_NO_WAIT);
            if (err < 0) {
                LOG_ERR("Buffer alloc failed for EP 0x%02x", i);
@@ -658,7 +657,6 @@ static int eps_ctx_init(void)
    if (CFG_EP_ISOIN_CNT) {
        ep_ctx = in_endpoint_ctx(NRF_USBD_EPIN(8));
        __ASSERT_NO_MSG(ep_ctx);

        ep_ctx_reset(ep_ctx);
    }
@@ -667,7 +665,7 @@ static int eps_ctx_init(void)
        __ASSERT_NO_MSG(ep_ctx);

        if (!ep_ctx->buf.block.data) {
-            err = k_mem_pool_alloc(&ep_buf_pool, &ep_ctx->buf.block,
+            err = z_mem_pool_alloc(&ep_buf_pool, &ep_ctx->buf.block,
                                   ISO_EP_BUF_MAX_SZ,
                                   K_NO_WAIT);
            if (err < 0) {
@@ -696,7 +694,7 @@ static void eps_ctx_uninit(void)
    for (i = 0U; i < CFG_EPOUT_CNT; i++) {
        ep_ctx = out_endpoint_ctx(i);
        __ASSERT_NO_MSG(ep_ctx);
-        k_mem_pool_free(&ep_ctx->buf.block);
+        z_mem_pool_free(&ep_ctx->buf.block);
        memset(ep_ctx, 0, sizeof(*ep_ctx));
    }

@@ -709,7 +707,7 @@ static void eps_ctx_uninit(void)
    if (CFG_EP_ISOOUT_CNT) {
        ep_ctx = out_endpoint_ctx(NRF_USBD_EPOUT(8));
        __ASSERT_NO_MSG(ep_ctx);
-        k_mem_pool_free(&ep_ctx->buf.block);
+        z_mem_pool_free(&ep_ctx->buf.block);
        memset(ep_ctx, 0, sizeof(*ep_ctx));
    }
}
include/kernel.h (146 changed lines)
@@ -4043,40 +4043,18 @@ extern int k_mbox_get(struct k_mbox *mbox, struct k_mbox_msg *rx_msg,
 */
extern void k_mbox_data_get(struct k_mbox_msg *rx_msg, void *buffer);

-/**
- * @brief Retrieve mailbox message data into a memory pool block.
- *
- * This routine completes the processing of a received message by retrieving
- * its data into a memory pool block, then disposing of the message.
- * The memory pool block that results from successful retrieval must be
- * returned to the pool once the data has been processed, even in cases
- * where zero bytes of data are retrieved.
- *
- * Alternatively, this routine can be used to dispose of a received message
- * without retrieving its data. In this case there is no need to return a
- * memory pool block to the pool.
- *
- * This routine allocates a new memory pool block for the data only if the
- * data is not already in one. If a new block cannot be allocated, the routine
- * returns a failure code and the received message is left unchanged. This
- * permits the caller to reattempt data retrieval at a later time or to dispose
- * of the received message without retrieving its data.
- *
- * @param rx_msg Address of a receive message descriptor.
- * @param pool Address of memory pool, or NULL to discard data.
- * @param block Address of the area to hold memory pool block info.
- * @param timeout Time to wait for a memory pool block,
- *                or one of the special values K_NO_WAIT
- *                and K_FOREVER.
- *
- * @retval 0 Data retrieved.
- * @retval -ENOMEM Returned without waiting.
- * @retval -EAGAIN Waiting period timed out.
- */
-extern int k_mbox_data_block_get(struct k_mbox_msg *rx_msg,
+extern int z_mbox_data_block_get(struct k_mbox_msg *rx_msg,
                                 struct k_mem_pool *pool,
                                 struct k_mem_block *block,
                                 k_timeout_t timeout);
+__deprecated
+static inline int k_mbox_data_block_get(struct k_mbox_msg *rx_msg,
+                                        struct k_mem_pool *pool,
+                                        struct k_mem_block *block,
+                                        k_timeout_t timeout)
+{
+    return z_mbox_data_block_get(rx_msg, pool, block, timeout);
+}

/** @} */
@@ -4535,92 +4513,40 @@ void k_heap_free(struct k_heap *h, void *mem);
    }, \
    }

-/**
- * @brief Statically define and initialize a memory pool.
- *
- * The memory pool's buffer contains @a n_max blocks that are @a max_size bytes
- * long. The memory pool allows blocks to be repeatedly partitioned into
- * quarters, down to blocks of @a min_size bytes long. The buffer is aligned
- * to a @a align -byte boundary.
- *
- * If the pool is to be accessed outside the module where it is defined, it
- * can be declared via
- *
- * @note The k_mem_pool
- * API is implemented on top of a k_heap, which is a more general
- * purpose allocator which does not make the same promises about
- * splitting or alignment detailed above. Blocks will be aligned only
- * to the 8 byte chunk stride of the underlying heap and may point
- * anywhere within the heap; they are not split into four as
- * described.
- *
- * @code extern struct k_mem_pool <name>; @endcode
- *
- * @param name Name of the memory pool.
- * @param minsz Size of the smallest blocks in the pool (in bytes).
- * @param maxsz Size of the largest blocks in the pool (in bytes).
- * @param nmax Number of maximum sized blocks in the pool.
- * @param align Alignment of the pool's buffer (power of 2).
- */
#define K_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align) \
+    __DEPRECATED_MACRO \
+    Z_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align)

-/**
- * @brief Allocate memory from a memory pool.
- *
- * This routine allocates a memory block from a memory pool.
- *
- * @note Can be called by ISRs, but @a timeout must be set to K_NO_WAIT.
- *
- * @param pool Address of the memory pool.
- * @param block Pointer to block descriptor for the allocated memory.
- * @param size Amount of memory to allocate (in bytes).
- * @param timeout Waiting period to wait for operation to complete.
- *        Use K_NO_WAIT to return without waiting,
- *        or K_FOREVER to wait as long as necessary.
- *
- * @retval 0 Memory allocated. The @a data field of the block descriptor
- *         is set to the starting address of the memory block.
- * @retval -ENOMEM Returned without waiting.
- * @retval -EAGAIN Waiting period timed out.
- */
-extern int k_mem_pool_alloc(struct k_mem_pool *pool, struct k_mem_block *block,
+extern int z_mem_pool_alloc(struct k_mem_pool *pool, struct k_mem_block *block,
                            size_t size, k_timeout_t timeout);
+__deprecated
+static inline int k_mem_pool_alloc(struct k_mem_pool *pool,
+                                   struct k_mem_block *block,
+                                   size_t size, k_timeout_t timeout)
+{
+    return z_mem_pool_alloc(pool, block, size, timeout);
+}

-/**
- * @brief Allocate memory from a memory pool with malloc() semantics
- *
- * Such memory must be released using k_free().
- *
- * @param pool Address of the memory pool.
- * @param size Amount of memory to allocate (in bytes).
- * @return Address of the allocated memory if successful, otherwise NULL
- */
-extern void *k_mem_pool_malloc(struct k_mem_pool *pool, size_t size);
+extern void *z_mem_pool_malloc(struct k_mem_pool *pool, size_t size);
+__deprecated
+static inline void *k_mem_pool_malloc(struct k_mem_pool *pool, size_t size)
+{
+    return z_mem_pool_malloc(pool, size);
+}

-/**
- * @brief Free memory allocated from a memory pool.
- *
- * This routine releases a previously allocated memory block back to its
- * memory pool.
- *
- * @param block Pointer to block descriptor for the allocated memory.
- *
- * @return N/A
- */
-extern void k_mem_pool_free(struct k_mem_block *block);
+extern void z_mem_pool_free(struct k_mem_block *block);
+__deprecated
+static inline void k_mem_pool_free(struct k_mem_block *block)
+{
+    return z_mem_pool_free(block);
+}

-/**
- * @brief Free memory allocated from a memory pool.
- *
- * This routine releases a previously allocated memory block back to its
- * memory pool
- *
- * @param id Memory block identifier.
- *
- * @return N/A
- */
-extern void k_mem_pool_free_id(struct k_mem_block_id *id);
+extern void z_mem_pool_free_id(struct k_mem_block_id *id);
+__deprecated
+static inline void k_mem_pool_free_id(struct k_mem_block_id *id)
+{
+    return z_mem_pool_free_id(id);
+}

/**
 * @}
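With the old names reduced to __deprecated inline wrappers (and
__DEPRECATED_MACRO on K_MEM_POOL_DEFINE), any remaining caller now gets a
compile-time diagnostic. A hypothetical leftover call site, for illustration:

    K_MEM_POOL_DEFINE(old_pool, 64, 4096, 3, 4);  /* warns: deprecated macro */

    void leftover(void)
    {
        struct k_mem_block block;

        /* gcc reports something like:
         * warning: 'k_mem_pool_alloc' is deprecated [-Wdeprecated-declarations]
         */
        k_mem_pool_alloc(&old_pool, &block, 128, K_NO_WAIT);
        k_mem_pool_free(&block);
    }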
@@ -979,7 +979,7 @@ extern const struct net_buf_data_cb net_buf_var_cb;
 */
#define NET_BUF_POOL_VAR_DEFINE(_name, _count, _data_size, _destroy)          \
    static struct net_buf _net_buf_##_name[_count] __noinit;                  \
-    K_MEM_POOL_DEFINE(net_buf_mem_pool_##_name, 16, _data_size, 1, 4);        \
+    Z_MEM_POOL_DEFINE(net_buf_mem_pool_##_name, 16, _data_size, 1, 4);        \
    static const struct net_buf_data_alloc net_buf_data_alloc_##_name = {     \
        .cb = &net_buf_var_cb,                                                 \
        .alloc_data = &net_buf_mem_pool_##_name,                               \
@@ -68,7 +68,7 @@ void k_heap_free(struct k_heap *h, void *mem)
 * backend.
 */

-int k_mem_pool_alloc(struct k_mem_pool *p, struct k_mem_block *block,
+int z_mem_pool_alloc(struct k_mem_pool *p, struct k_mem_block *block,
                     size_t size, k_timeout_t timeout)
{
    block->id.heap = p->heap;
@@ -84,7 +84,7 @@ int k_mem_pool_alloc(struct k_mem_pool *p, struct k_mem_block *block,
    }
}

-void k_mem_pool_free_id(struct k_mem_block_id *id)
+void z_mem_pool_free_id(struct k_mem_block_id *id)
{
    k_heap_free(id->heap, id->data);
}
@@ -183,7 +183,7 @@ static void mbox_message_dispose(struct k_mbox_msg *rx_msg)

    /* release sender's memory pool block */
    if (rx_msg->tx_block.data != NULL) {
-        k_mem_pool_free(&rx_msg->tx_block);
+        z_mem_pool_free(&rx_msg->tx_block);
        rx_msg->tx_block.data = NULL;
    }

@@ -351,7 +351,7 @@ void k_mbox_data_get(struct k_mbox_msg *rx_msg, void *buffer)
    mbox_message_dispose(rx_msg);
}

-int k_mbox_data_block_get(struct k_mbox_msg *rx_msg, struct k_mem_pool *pool,
+int z_mbox_data_block_get(struct k_mbox_msg *rx_msg, struct k_mem_pool *pool,
                          struct k_mem_block *block, k_timeout_t timeout)
{
    int result;
@@ -375,7 +375,7 @@ int k_mbox_data_block_get(struct k_mbox_msg *rx_msg, struct k_mem_pool *pool,
    }

    /* allocate memory pool block (even when message size is 0!) */
-    result = k_mem_pool_alloc(pool, block, rx_msg->size, timeout);
+    result = z_mem_pool_alloc(pool, block, rx_msg->size, timeout);
    if (result != 0) {
        return result;
    }
@@ -8,12 +8,12 @@
#include <string.h>
#include <sys/math_extras.h>

-void k_mem_pool_free(struct k_mem_block *block)
+void z_mem_pool_free(struct k_mem_block *block)
{
-    k_mem_pool_free_id(&block->id);
+    z_mem_pool_free_id(&block->id);
}

-void *k_mem_pool_malloc(struct k_mem_pool *pool, size_t size)
+void *z_mem_pool_malloc(struct k_mem_pool *pool, size_t size)
{
    struct k_mem_block block;

@@ -25,7 +25,7 @@ void *k_mem_pool_malloc(struct k_mem_pool *pool, size_t size)
                       &size)) {
        return NULL;
    }
-    if (k_mem_pool_alloc(pool, &block, size, K_NO_WAIT) != 0) {
+    if (z_mem_pool_alloc(pool, &block, size, K_NO_WAIT) != 0) {
        return NULL;
    }

@@ -43,7 +43,7 @@ void k_free(void *ptr)
        ptr = (char *)ptr - WB_UP(sizeof(struct k_mem_block_id));

        /* return block to the heap memory pool */
-        k_mem_pool_free_id(ptr);
+        z_mem_pool_free_id(ptr);
    }
}

@@ -56,13 +56,13 @@ void k_free(void *ptr)
 * that has the address of the associated memory pool struct.
 */

-K_MEM_POOL_DEFINE(_heap_mem_pool, CONFIG_HEAP_MEM_POOL_MIN_SIZE,
+Z_MEM_POOL_DEFINE(_heap_mem_pool, CONFIG_HEAP_MEM_POOL_MIN_SIZE,
                  CONFIG_HEAP_MEM_POOL_SIZE, 1, 4);
#define _HEAP_MEM_POOL (&_heap_mem_pool)

void *k_malloc(size_t size)
{
-    return k_mem_pool_malloc(_HEAP_MEM_POOL, size);
+    return z_mem_pool_malloc(_HEAP_MEM_POOL, size);
}

void *k_calloc(size_t nmemb, size_t size)
@@ -101,7 +101,7 @@ void *z_thread_malloc(size_t size)
    }

    if (pool) {
-        ret = k_mem_pool_malloc(pool, size);
+        ret = z_mem_pool_malloc(pool, size);
    } else {
        ret = NULL;
    }
@@ -65,7 +65,7 @@ static void pipe_async_finish(struct k_pipe_async *async_desc)
     * to prevent the called routines from scheduling a new thread.
     */

-    k_mem_pool_free(async_desc->desc.block);
+    z_mem_pool_free(async_desc->desc.block);

    if (async_desc->desc.sem != NULL) {
        k_sem_give(async_desc->desc.sem);
@@ -9,14 +9,14 @@
#include <init.h>
#include <sys/mempool.h>

-K_MEM_POOL_DEFINE(lvgl_mem_pool,
+Z_MEM_POOL_DEFINE(lvgl_mem_pool,
                  CONFIG_LVGL_MEM_POOL_MIN_SIZE,
                  CONFIG_LVGL_MEM_POOL_MAX_SIZE,
                  CONFIG_LVGL_MEM_POOL_NUMBER_BLOCKS, 4);

void *lvgl_malloc(size_t size)
{
-    return k_mem_pool_malloc(&lvgl_mem_pool, size);
+    return z_mem_pool_malloc(&lvgl_mem_pool, size);
}

void lvgl_free(void *ptr)
@@ -511,7 +511,7 @@ K_THREAD_DEFINE(app_thread, STACK_SIZE,
        start_app, NULL, NULL, NULL,
        THREAD_PRIORITY, K_USER, -1);

-static K_MEM_POOL_DEFINE(app_mem_pool, sizeof(uintptr_t), 1024,
+static Z_MEM_POOL_DEFINE(app_mem_pool, sizeof(uintptr_t), 1024,
                         2, sizeof(uintptr_t));
#endif

@@ -85,12 +85,12 @@ struct app_evt_t {
#define FIFO_ELEM_COUNT 255
#define FIFO_ELEM_ALIGN sizeof(unsigned int)

-K_MEM_POOL_DEFINE(event_elem_pool, FIFO_ELEM_MIN_SZ, FIFO_ELEM_MAX_SZ,
+Z_MEM_POOL_DEFINE(event_elem_pool, FIFO_ELEM_MIN_SZ, FIFO_ELEM_MAX_SZ,
                  FIFO_ELEM_COUNT, FIFO_ELEM_ALIGN);

static inline void app_evt_free(struct app_evt_t *ev)
{
-    k_mem_pool_free(&ev->block);
+    z_mem_pool_free(&ev->block);
}

static inline void app_evt_put(struct app_evt_t *ev)
@@ -121,14 +121,14 @@ static inline struct app_evt_t *app_evt_alloc(void)
    struct app_evt_t *ev;
    struct k_mem_block block;

-    ret = k_mem_pool_alloc(&event_elem_pool, &block,
+    ret = z_mem_pool_alloc(&event_elem_pool, &block,
                           sizeof(struct app_evt_t),
                           K_NO_WAIT);
    if (ret < 0) {
        LOG_ERR("APP event allocation failed!");
        app_evt_flush();

-        ret = k_mem_pool_alloc(&event_elem_pool, &block,
+        ret = z_mem_pool_alloc(&event_elem_pool, &block,
                               sizeof(struct app_evt_t),
                               K_NO_WAIT);
        if (ret < 0) {
@@ -21,7 +21,7 @@ LOG_MODULE_REGISTER(app_a);
/* Resource pool for allocations made by the kernel on behalf of system
 * calls. Needed for k_queue_alloc_append()
 */
-K_MEM_POOL_DEFINE(app_a_resource_pool, 32, 256, 5, 4);
+Z_MEM_POOL_DEFINE(app_a_resource_pool, 32, 256, 5, 4);

/* Define app_a_partition, where all globals for this app will be routed.
 * The partition starting address and size are populated by build system
@@ -16,7 +16,7 @@ LOG_MODULE_REGISTER(app_b);
/* Resource pool for allocations made by the kernel on behalf of system
 * calls. Needed for k_queue_alloc_append()
 */
-K_MEM_POOL_DEFINE(app_b_resource_pool, 32, 256, 4, 4);
+Z_MEM_POOL_DEFINE(app_b_resource_pool, 32, 256, 4, 4);

/* Define app_b_partition, where all globals for this app will be routed.
 * The partition starting address and size are populated by build system
@@ -47,7 +47,7 @@ BUILD_ASSERT(CONFIG_FS_LITTLEFS_CACHE_SIZE >= 4);
#define CONFIG_FS_LITTLEFS_FC_MEM_POOL_NUM_BLOCKS CONFIG_FS_LITTLEFS_NUM_FILES
#endif

-K_MEM_POOL_DEFINE(file_cache_pool,
+Z_MEM_POOL_DEFINE(file_cache_pool,
                  CONFIG_FS_LITTLEFS_FC_MEM_POOL_MIN_SIZE,
                  CONFIG_FS_LITTLEFS_FC_MEM_POOL_MAX_SIZE,
                  CONFIG_FS_LITTLEFS_FC_MEM_POOL_NUM_BLOCKS, 4);
@@ -175,7 +175,7 @@ static void release_file_data(struct fs_file_t *fp)
    struct lfs_file_data *fdp = fp->filep;

    if (fdp->config.buffer) {
-        k_mem_pool_free(&fdp->cache_block);
+        z_mem_pool_free(&fdp->cache_block);
    }

    k_mem_slab_free(&file_data_pool, &fp->filep);
@@ -213,7 +213,7 @@ static int littlefs_open(struct fs_file_t *fp, const char *path,

    memset(fdp, 0, sizeof(*fdp));

-    ret = k_mem_pool_alloc(&file_cache_pool, &fdp->cache_block,
+    ret = z_mem_pool_alloc(&file_cache_pool, &fdp->cache_block,
                           lfs->cfg->cache_size, K_NO_WAIT);
    LOG_DBG("alloc %u file cache: %d", lfs->cfg->cache_size, ret);
    if (ret != 0) {
@@ -101,7 +101,7 @@ static uint8_t *mem_pool_data_alloc(struct net_buf *buf, size_t *size,
    uint8_t *ref_count;

    /* Reserve extra space for k_mem_block_id and ref-count (uint8_t) */
-    if (k_mem_pool_alloc(pool, &block,
+    if (z_mem_pool_alloc(pool, &block,
                         sizeof(struct k_mem_block_id) + 1 + *size,
                         timeout)) {
        return NULL;
@@ -129,7 +129,7 @@ static void mem_pool_data_unref(struct net_buf *buf, uint8_t *data)

    /* Need to copy to local variable due to alignment */
    memcpy(&id, ref_count - sizeof(id), sizeof(id));
-    k_mem_pool_free_id(&id);
+    z_mem_pool_free_id(&id);
}

const struct net_buf_data_cb net_buf_var_cb = {
@@ -20,7 +20,7 @@
#endif


-K_MEM_POOL_DEFINE(gcov_heap_mem_pool,
+Z_MEM_POOL_DEFINE(gcov_heap_mem_pool,
                  MALLOC_MIN_BLOCK_SIZE,
                  MALLOC_MAX_HEAP_SIZE, 1, 4);

@@ -233,7 +233,7 @@ void gcov_coverage_dump(void)

        size = calculate_buff_size(gcov_list);

-        buffer = (uint8_t *) k_mem_pool_malloc(&gcov_heap_mem_pool, size);
+        buffer = (uint8_t *) z_mem_pool_malloc(&gcov_heap_mem_pool, size);
        if (!buffer) {
            printk("No Mem available to continue dump\n");
            goto coverage_dump_end;
@@ -61,7 +61,7 @@ K_PIPE_DEFINE(PIPE_NOBUFF, 0, 4);
K_PIPE_DEFINE(PIPE_SMALLBUFF, 256, 4);
K_PIPE_DEFINE(PIPE_BIGBUFF, 4096, 4);

-K_MEM_POOL_DEFINE(DEMOPOOL, 16, 16, 1, 4);
+Z_MEM_POOL_DEFINE(DEMOPOOL, 16, 16, 1, 4);


/**
@@ -26,11 +26,11 @@ void mempool_test(void)
    PRINT_STRING(dashline, output_file);
    et = BENCH_START();
    for (i = 0; i < NR_OF_POOL_RUNS; i++) {
-        return_value |= k_mem_pool_alloc(&DEMOPOOL,
+        return_value |= z_mem_pool_alloc(&DEMOPOOL,
                                         &block,
                                         16,
                                         K_FOREVER);
-        k_mem_pool_free(&block);
+        z_mem_pool_free(&block);
    }
    et = TIME_STAMP_DELTA_GET(et);
    check_result();
@@ -15,8 +15,8 @@
#define MAIL_LEN 64
/**TESTPOINT: init via K_MBOX_DEFINE*/
K_MBOX_DEFINE(kmbox);
-K_MEM_POOL_DEFINE(mpooltx, 8, MAIL_LEN, 1, 4);
-K_MEM_POOL_DEFINE(mpoolrx, 8, MAIL_LEN, 1, 4);
+Z_MEM_POOL_DEFINE(mpooltx, 8, MAIL_LEN, 1, 4);
+Z_MEM_POOL_DEFINE(mpoolrx, 8, MAIL_LEN, 1, 4);

static struct k_mbox mbox;

@@ -151,7 +151,7 @@ static void tmbox_put(struct k_mbox *pmbox)
        mmsg.info = ASYNC_PUT_GET_BLOCK;
        mmsg.size = MAIL_LEN;
        mmsg.tx_data = NULL;
-        zassert_equal(k_mem_pool_alloc(&mpooltx, &mmsg.tx_block,
+        zassert_equal(z_mem_pool_alloc(&mpooltx, &mmsg.tx_block,
                                       MAIL_LEN, K_NO_WAIT), 0, NULL);
        memcpy(mmsg.tx_block.data, data[info_type], MAIL_LEN);
        if (info_type == TARGET_SOURCE_THREAD_BLOCK) {
@@ -221,7 +221,7 @@ static void tmbox_put(struct k_mbox *pmbox)
        /* Dispose of tx mem pool once we receive it */
        mmsg.size = MAIL_LEN;
        mmsg.tx_data = NULL;
-        zassert_equal(k_mem_pool_alloc(&mpooltx, &mmsg.tx_block,
+        zassert_equal(z_mem_pool_alloc(&mpooltx, &mmsg.tx_block,
                                       MAIL_LEN, K_NO_WAIT), 0, NULL);
        memcpy(mmsg.tx_block.data, data[0], MAIL_LEN);
        mmsg.tx_target_thread = K_ANY;
@@ -357,7 +357,7 @@ static void tmbox_get(struct k_mbox *pmbox)
        }
        zassert_true(k_mbox_get(pmbox, &mmsg, NULL, K_FOREVER) == 0,
                     NULL);
-        zassert_true(k_mbox_data_block_get
+        zassert_true(z_mbox_data_block_get
                     (&mmsg, &mpoolrx, &rxblock, K_FOREVER) == 0
                     , NULL);
        zassert_equal(mmsg.info, ASYNC_PUT_GET_BLOCK, NULL);
@@ -365,7 +365,7 @@ static void tmbox_get(struct k_mbox *pmbox)
        /*verify rxblock*/
        zassert_true(memcmp(rxblock.data, data[info_type], MAIL_LEN)
                     == 0, NULL);
-        k_mem_pool_free(&rxblock);
+        z_mem_pool_free(&rxblock);
        break;
    case INCORRECT_RECEIVER_TID:
        mmsg.rx_source_thread = random_tid;
@@ -383,7 +383,7 @@ static void tmbox_get(struct k_mbox *pmbox)
        mmsg.rx_source_thread = K_ANY;
        zassert_true(k_mbox_get(pmbox, &mmsg, NULL, K_FOREVER) == 0,
                     NULL);
-        zassert_true(k_mbox_data_block_get
+        zassert_true(z_mbox_data_block_get
                     (&mmsg, NULL, NULL, K_FOREVER) == 0,
                     NULL);
        break;
@@ -401,14 +401,14 @@ static void tmbox_get(struct k_mbox *pmbox)
        mmsg.size = MAIL_LEN;
        zassert_true(k_mbox_get(pmbox, &mmsg, NULL, K_FOREVER) == 0,
                     NULL);
-        zassert_true(k_mbox_data_block_get
+        zassert_true(z_mbox_data_block_get
                     (&mmsg, &mpoolrx, &rxblock, K_FOREVER) == 0, NULL);

        /* verfiy */
        zassert_true(memcmp(rxblock.data, data[1], MAIL_LEN)
                     == 0, NULL);
        /* free the block */
-        k_mem_pool_free(&rxblock);
+        z_mem_pool_free(&rxblock);

        break;
    case BLOCK_GET_BUFF_TO_SMALLER_POOL:
@@ -420,7 +420,7 @@ static void tmbox_get(struct k_mbox *pmbox)
        zassert_true(k_mbox_get(pmbox, &mmsg, NULL, K_FOREVER) == 0,
                     NULL);

-        zassert_true(k_mbox_data_block_get
+        zassert_true(z_mbox_data_block_get
                     (&mmsg, &mpoolrx, &rxblock, K_MSEC(1)) == -EAGAIN,
                     NULL);

@@ -10,8 +10,8 @@
#define STACK_SIZE (512 + CONFIG_TEST_EXTRA_STACKSIZE)
#define MAIL_LEN 64

-K_MEM_POOL_DEFINE(mpooltx, 8, MAIL_LEN, 1, 4);
-K_MEM_POOL_DEFINE(mpoolrx, 8, MAIL_LEN, 1, 4);
+Z_MEM_POOL_DEFINE(mpooltx, 8, MAIL_LEN, 1, 4);
+Z_MEM_POOL_DEFINE(mpoolrx, 8, MAIL_LEN, 1, 4);

static K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);

@@ -21,7 +21,7 @@ static inline void dummy_end(struct k_timer *timer)
K_THREAD_STACK_DEFINE(test_1_stack, INHERIT_STACK_SIZE);
K_THREAD_STACK_DEFINE(parent_thr_stack, STACK_SIZE);
K_THREAD_STACK_DEFINE(child_thr_stack, STACK_SIZE);
-K_MEM_POOL_DEFINE(res_pool, BLK_SIZE_MIN, BLK_SIZE_MAX, BLK_NUM_MAX, BLK_ALIGN);
+Z_MEM_POOL_DEFINE(res_pool, BLK_SIZE_MIN, BLK_SIZE_MAX, BLK_NUM_MAX, BLK_ALIGN);
K_SEM_DEFINE(inherit_sem, SEMAPHORE_INIT_COUNT, SEMAPHORE_MAX_COUNT);
K_SEM_DEFINE(sync_sem, SEM_INIT_VAL, SEM_MAX_VAL);
K_MUTEX_DEFINE(inherit_mutex);
@@ -414,7 +414,7 @@ void test_syscall_context(void)
    k_thread_user_mode_enter(test_syscall_context_user, NULL, NULL, NULL);
}

-K_MEM_POOL_DEFINE(test_pool, BUF_SIZE, BUF_SIZE, 4 * NR_THREADS, 4);
+Z_MEM_POOL_DEFINE(test_pool, BUF_SIZE, BUF_SIZE, 4 * NR_THREADS, 4);

void test_main(void)
{
@@ -50,7 +50,7 @@ dummy_test(test_msgq_user_purge_when_put);
#else
#define MAX_SZ 128
#endif
-K_MEM_POOL_DEFINE(test_pool, 128, MAX_SZ, 2, 4);
+Z_MEM_POOL_DEFINE(test_pool, 128, MAX_SZ, 2, 4);

extern struct k_msgq kmsgq;
extern struct k_msgq msgq;
@@ -7,10 +7,10 @@
#include <ztest.h>

#define STACK_SIZE (1024 + CONFIG_TEST_EXTRA_STACKSIZE)
-#define PIPE_LEN (4 * _MPOOL_MINBLK)
-#define BYTES_TO_WRITE _MPOOL_MINBLK
+#define PIPE_LEN (4 * 16)
+#define BYTES_TO_WRITE 16
#define BYTES_TO_READ BYTES_TO_WRITE
-K_MEM_POOL_DEFINE(mpool, BYTES_TO_WRITE, PIPE_LEN, 1, 4);
+Z_MEM_POOL_DEFINE(mpool, BYTES_TO_WRITE, PIPE_LEN, 1, 4);

static ZTEST_DMEM unsigned char __aligned(4) data[] =
    "abcd1234$%^&PIPEefgh5678!/?*EPIPijkl9012[]<>PEPImnop3456{}()IPEP";
@@ -40,7 +40,7 @@ K_SEM_DEFINE(end_sema, 0, 1);
#else
#define SZ 128
#endif
-K_MEM_POOL_DEFINE(test_pool, SZ, SZ, 4, 4);
+Z_MEM_POOL_DEFINE(test_pool, SZ, SZ, 4, 4);

static void tpipe_put(struct k_pipe *ppipe, k_timeout_t timeout)
{
@@ -63,7 +63,7 @@ static void tpipe_block_put(struct k_pipe *ppipe, struct k_sem *sema,

    for (int i = 0; i < PIPE_LEN; i += BYTES_TO_WRITE) {
        /**TESTPOINT: pipe block put*/
-        zassert_equal(k_mem_pool_alloc(&mpool, &block, BYTES_TO_WRITE,
+        zassert_equal(z_mem_pool_alloc(&mpool, &block, BYTES_TO_WRITE,
                                       timeout), 0, NULL);
        memcpy(block.data, &data[i], BYTES_TO_WRITE);
        k_pipe_block_put(ppipe, &block, BYTES_TO_WRITE, sema);
@@ -344,7 +344,7 @@ void test_pipe_get_put(void)
}
/**
 * @brief Test resource pool free
- * @see k_mem_pool_malloc()
+ * @see z_mem_pool_malloc()
 */
#ifdef CONFIG_USERSPACE
void test_resource_pool_auto_free(void)
@@ -352,8 +352,8 @@ void test_resource_pool_auto_free(void)
    /* Pool has 2 blocks, both should succeed if kernel object and pipe
     * buffer are auto-freed when the allocating threads exit
     */
-    zassert_true(k_mem_pool_malloc(&test_pool, 64) != NULL, NULL);
-    zassert_true(k_mem_pool_malloc(&test_pool, 64) != NULL, NULL);
+    zassert_true(z_mem_pool_malloc(&test_pool, 64) != NULL, NULL);
+    zassert_true(z_mem_pool_malloc(&test_pool, 64) != NULL, NULL);
}
#endif

@@ -404,7 +404,7 @@ void test_half_pipe_saturating_block_put(void)

    /* Ensure half the mempool is still queued in the pipe */
    for (nb = 0; nb < ARRAY_SIZE(blocks); nb++) {
-        if (k_mem_pool_alloc(&mpool, &blocks[nb],
+        if (z_mem_pool_alloc(&mpool, &blocks[nb],
                             BYTES_TO_WRITE, K_NO_WAIT) != 0) {
            break;
        }
@@ -414,7 +414,7 @@ void test_half_pipe_saturating_block_put(void)
    zassert_true(nb >= 2 && nb < ARRAY_SIZE(blocks), NULL);

    for (int i = 0; i < nb; i++) {
-        k_mem_pool_free(&blocks[i]);
+        z_mem_pool_free(&blocks[i]);
    }

    tpipe_get(&khalfpipe, K_FOREVER);
@@ -20,7 +20,7 @@ extern void test_poll_grant_access(void);
#define MAX_SZ 128
#endif

-K_MEM_POOL_DEFINE(test_pool, 128, MAX_SZ, 4, 4);
+Z_MEM_POOL_DEFINE(test_pool, 128, MAX_SZ, 4, 4);

/*test case main entry*/
void test_main(void)
@@ -25,7 +25,7 @@ dummy_test(test_auto_free);
#else
#define MAX_SZ 96
#endif
-K_MEM_POOL_DEFINE(test_pool, 16, MAX_SZ, 4, 4);
+Z_MEM_POOL_DEFINE(test_pool, 16, MAX_SZ, 4, 4);

/*test case main entry*/
void test_main(void)
@@ -11,8 +11,8 @@
/**TESTPOINT: init via K_QUEUE_DEFINE*/
K_QUEUE_DEFINE(kqueue);

-K_MEM_POOL_DEFINE(mem_pool_fail, 4, _MPOOL_MINBLK, 1, 4);
-K_MEM_POOL_DEFINE(mem_pool_pass, 4, 64, 4, 4);
+Z_MEM_POOL_DEFINE(mem_pool_fail, 4, _MPOOL_MINBLK, 1, 4);
+Z_MEM_POOL_DEFINE(mem_pool_pass, 4, 64, 4, 4);

struct k_queue queue;
static qdata_t data[LIST_LEN];
@@ -313,7 +313,7 @@ void test_queue_alloc(void)
     * there's some base minimal memory in there that can be used.
     * Make sure it's really truly full.
     */
-    while (k_mem_pool_alloc(&mem_pool_fail, &block, 1, K_NO_WAIT) == 0) {
+    while (z_mem_pool_alloc(&mem_pool_fail, &block, 1, K_NO_WAIT) == 0) {
    }

    k_queue_init(&queue);
@@ -185,7 +185,7 @@ void test_queue_alloc_append_user(void)
/**
 * @brief Test to verify free of allocated elements of queue
 * @ingroup kernel_queue_tests
- * @see k_mem_pool_alloc(), k_mem_pool_free()
+ * @see z_mem_pool_alloc(), z_mem_pool_free()
 */
void test_auto_free(void)
{
@@ -200,7 +200,7 @@ void test_auto_free(void)
    int i;

    for (i = 0; i < 4; i++) {
-        zassert_false(k_mem_pool_alloc(&test_pool, &b[i], 64,
+        zassert_false(z_mem_pool_alloc(&test_pool, &b[i], 64,
                                       K_FOREVER),
                      "memory not auto released!");
    }
@@ -209,7 +209,7 @@ void test_auto_free(void)
     * case we want to use it again.
     */
    for (i = 0; i < 4; i++) {
-        k_mem_pool_free(&b[i]);
+        z_mem_pool_free(&b[i]);
    }
}

@@ -66,7 +66,7 @@ static struct k_sem end_sema;



-K_MEM_POOL_DEFINE(test_pool, 128, 128, 2, 4);
+Z_MEM_POOL_DEFINE(test_pool, 128, 128, 2, 4);

extern struct k_stack kstack;
extern struct k_stack stack;