shared_multi_heap: Rework framework

Entirely rework the shared_multi_heap framework. Refer to the
documentation for more information.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Commit 1dcea253d2 by Carlo Caione <ccaione@baylibre.com>, 2022-03-29 10:18:06 +02:00, committed by Carles Cufí.
11 changed files with 409 additions and 252 deletions.


@@ -5,80 +5,78 @@ Shared Multi Heap
 The shared multi-heap memory pool manager uses the multi-heap allocator to
 manage a set of reserved memory regions with different capabilities /
-attributes (cacheable, non-cacheable, etc...) defined in the DT.
+attributes (cacheable, non-cacheable, etc...).
 
-The user can request allocation from the shared pool specifying the capability
-/ attribute of interest for the memory (cacheable / non-cacheable memory,
-etc...).
+All the different regions can be added at run-time to the shared multi-heap
+pool providing an opaque "attribute" value (an integer or enum value) that can
+be used by drivers or applications to request memory with certain capabilities.
 
-The different heaps with their attributes available in the shared pool are
-defined into the DT file leveraging the ``reserved-memory`` nodes.
+This framework is commonly used as follows:
 
-This is a DT example declaring three different memory regions with different
-cacheability attributes: ``cacheable`` and ``non-cacheable``
+1. At boot time some platform code initializes the shared multi-heap framework
+   using :c:func:`shared_multi_heap_pool_init()` and adds the memory regions
+   to the pool with :c:func:`shared_multi_heap_add()`, possibly gathering the
+   needed information for the regions from the DT.
 
-.. code-block:: devicetree
+2. Each memory region is encoded in a :c:type:`shared_multi_heap_region`
+   structure. This structure also carries an opaque, user-defined integer
+   value that is used to define the region capabilities (for example:
+   cacheability, cpu affinity, etc...)
 
-   / {
-        reserved-memory {
-                compatible = "reserved-memory";
-                #address-cells = <1>;
-                #size-cells = <1>;
-
-                res0: reserved@42000000 {
-                        compatible = "shared-multi-heap";
-                        reg = <0x42000000 0x1000>;
-                        capability = "cacheable";
-                        label = "res0";
-                };
-
-                res1: reserved@43000000 {
-                        compatible = "shared-multi-heap";
-                        reg = <0x43000000 0x2000>;
-                        capability = "non-cacheable";
-                        label = "res1";
-                };
-
-                res2: reserved2@44000000 {
-                        compatible = "shared-multi-heap";
-                        reg = <0x44000000 0x3000>;
-                        capability = "cacheable";
-                        label = "res2";
-                };
-        };
-   };
+   .. code-block:: c
+
+      // Init the shared multi-heap pool
+      shared_multi_heap_pool_init();
+
+      // Fill the struct with the data for cacheable memory
+      struct shared_multi_heap_region cacheable_r0 = {
+              .addr = addr_r0,
+              .size = size_r0,
+              .attr = SMH_REG_ATTR_CACHEABLE,
+      };
+
+      // Add the region to the pool
+      shared_multi_heap_add(&cacheable_r0, NULL);
+
+      // Add another cacheable region
+      struct shared_multi_heap_region cacheable_r1 = {
+              .addr = addr_r1,
+              .size = size_r1,
+              .attr = SMH_REG_ATTR_CACHEABLE,
+      };
+
+      shared_multi_heap_add(&cacheable_r1, NULL);
+
+      // Add a non-cacheable region
+      struct shared_multi_heap_region non_cacheable_r2 = {
+              .addr = addr_r2,
+              .size = size_r2,
+              .attr = SMH_REG_ATTR_NON_CACHEABLE,
+      };
+
+      shared_multi_heap_add(&non_cacheable_r2, NULL);
 
-The user can then request 4K from heap memory ``cacheable`` or
-``non-cacheable`` using the provided APIs:
+3. When a driver or application needs some dynamic memory with a certain
+   capability, it can use :c:func:`shared_multi_heap_alloc()` (or the aligned
+   version) to request the memory, using the opaque parameter to select the
+   correct set of attributes for the needed memory. The framework takes care
+   of selecting the correct heap (thus memory region) to carve memory from,
+   based on the opaque parameter and the runtime state of the heaps
+   (available memory, heap state, etc...)
 
-.. code-block:: c
+   .. code-block:: c
 
-   // Allocate 4K from cacheable memory
-   shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x1000);
+      // Allocate 4K from cacheable memory
+      shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x1000);
 
-   // Allocate 4K from non-cacheable
-   shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x1000);
+      // Allocate 4K from non-cacheable memory
+      shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x1000);
 
-The backend implementation will allocate the memory region from the heap with
-the correct attribute and using the region able to accommodate the required size.
-
-Special handling for MMU/MPU
-****************************
-
-For MMU/MPU enabled platforms sometimes it is required to setup and configure
-the memory regions before these are added to the managed pool. This is done at
-init time using the :c:func:`shared_multi_heap_pool_init()` function that
-accepts a :c:type:`smh_init_reg_fn_t` callback function. This callback will
-be called for each memory region at init time and it can be used to correctly
-map the region before this is considered valid and accessible.
-
 Adding new attributes
 *********************
 
-Currently only two memory attributes are supported: ``cacheable`` and
-``non-cacheable``. To add a new attribute:
+The API does not enforce any attributes, but it defines the two most common
+ones: :c:enum:`SMH_REG_ATTR_CACHEABLE` and :c:enum:`SMH_REG_ATTR_NON_CACHEABLE`.
 
-1. Add the new ``enum`` for the attribute in the :c:enum:`smh_reg_attr`
-2. Add the corresponding attribute name in :file:`shared-multi-heap.yaml`
-
 .. doxygengroup:: shared_multi_heap
    :project: Zephyr


@@ -1,20 +0,0 @@
-description: Shared multi-heap memory pool manager
-
-compatible: "shared-multi-heap"
-
-include:
-  - name: base.yaml
-    property-allowlist: ['reg', 'label']
-
-properties:
-  # Keep this in sync with shared_multi_heap.h
-  capability:
-    type: string
-    required: false
-    description: memory region capability
-    enum:
-      - "cacheable"
-      - "non-cacheable"
-
-  label:
-    required: true


@@ -4,34 +4,59 @@
  * SPDX-License-Identifier: Apache-2.0
  */
 
+/**
+ * @file
+ * @brief Public API for Shared Multi-Heap framework
+ */
+
 #ifndef ZEPHYR_INCLUDE_MULTI_HEAP_MANAGER_SMH_H_
 #define ZEPHYR_INCLUDE_MULTI_HEAP_MANAGER_SMH_H_
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 /**
- * @brief Shared multi-heap interface
+ * @brief Shared Multi-Heap (SMH) interface
  * @defgroup shared_multi_heap Shared multi-heap interface
  * @ingroup multi_heap
  * @{
  *
  * The shared multi-heap manager uses the multi-heap allocator to manage a set
- * of reserved memory regions with different capabilities / attributes
- * (cacheable, non-cacheable, etc...) defined in the DT.
+ * of memory regions with different capabilities / attributes (cacheable,
+ * non-cacheable, etc...).
  *
- * The user can request allocation from the shared pool specifying the
- * capability / attribute of interest for the memory (cacheable / non-cacheable
- * memory, etc...)
+ * All the different regions can be added at run-time to the shared multi-heap
+ * pool providing an opaque "attribute" value (an integer or enum value) that
+ * can be used by drivers or applications to request memory with certain
+ * capabilities.
  *
+ * This framework is commonly used as follows:
+ *
+ * - At boot time some platform code initializes the shared multi-heap
+ *   framework using @ref shared_multi_heap_pool_init and adds the memory
+ *   regions to the pool with @ref shared_multi_heap_add, possibly gathering
+ *   the needed information for the regions from the DT.
+ *
+ * - Each memory region is encoded in a @ref shared_multi_heap_region
+ *   structure. This structure also carries an opaque, user-defined integer
+ *   value that is used to define the region capabilities (for example:
+ *   cacheability, cpu affinity, etc...)
+ *
+ * - When a driver or application needs some dynamic memory with a certain
+ *   capability, it can use @ref shared_multi_heap_alloc (or the aligned
+ *   version) to request the memory, using the opaque parameter to select
+ *   the correct set of attributes for the needed memory. The framework
+ *   takes care of selecting the correct heap (thus memory region) to carve
+ *   memory from, based on the opaque parameter and the runtime state of the
+ *   heaps (available memory, heap state, etc...)
  */
 
 /**
- * @brief Memory region attributes / capabilities
+ * @brief SMH region attributes enumeration type.
  *
- * ** This list needs to be kept in sync with shared-multi-heap.yaml **
+ * Enumeration type for some common memory region attributes.
  */
 enum smh_reg_attr {
 	/** cacheable */
@@ -44,73 +69,101 @@ enum smh_reg_attr {
 	SMH_REG_ATTR_NUM,
 };
 
+/** Maximum number of standard attributes. */
+#define MAX_SHARED_MULTI_HEAP_ATTR SMH_REG_ATTR_NUM
+
 /**
  * @brief SMH region struct
  *
  * This struct is carrying information about the memory region to be added in
- * the multi-heap pool. This is filled by the manager with the information
- * coming from the reserved memory children nodes in the DT.
+ * the multi-heap pool.
  */
 struct shared_multi_heap_region {
-	enum smh_reg_attr attr;
+	/** Memory heap attribute */
+	unsigned int attr;
+
+	/** Memory heap starting virtual address */
 	uintptr_t addr;
+
+	/** Memory heap size in bytes */
 	size_t size;
 };
 
-/**
- * @brief Region init function
- *
- * This is a user-provided function whose responsibility is to setup or
- * initialize the memory region passed in input before this is added to the
- * heap pool by the shared multi-heap manager. This function can be used by
- * architectures using MMU / MPU that must correctly map the region before this
- * is considered valid and accessible.
- *
- * @param reg Pointer to the SMH region structure.
- * @param v_addr Virtual address obtained after mapping. For non-MMU
- *               architectures this value is the physical address of the
- *               region.
- * @param size Size of the region after mapping.
- *
- * @return True if the region is ready to be added to the heap pool.
- *         False if the region must be skipped.
- */
-typedef bool (*smh_init_reg_fn_t)(struct shared_multi_heap_region *reg,
-				  uint8_t **v_addr, size_t *size);
-
 /**
  * @brief Init the pool
  *
- * Initialize the shared multi-heap pool and hook-up the region init function.
+ * This must be the first function to be called to initialize the shared
+ * multi-heap pool. All the individual heaps must be added later with @ref
+ * shared_multi_heap_add.
  *
- * @param smh_init_reg_fn The function pointer to the region init function. Can
- *                        be NULL for non-MPU / non-MMU architectures.
+ * @note As for the generic multi-heap allocator the expectation is that this
+ *       function will be called at soc- or board-level.
+ *
+ * @retval 0 on success.
+ * @retval -EALREADY when the pool was already inited.
+ * @retval other errno codes
  */
-int shared_multi_heap_pool_init(smh_init_reg_fn_t smh_init_reg_fn);
+int shared_multi_heap_pool_init(void);
 
 /**
  * @brief Allocate memory from the memory shared multi-heap pool
  *
- * Allocate a block of memory of the specified size in bytes and with a
- * specified capability / attribute.
+ * Allocates a block of memory of the specified size in bytes and with a
+ * specified capability / attribute. The opaque attribute parameter is used
+ * by the backend to select the correct heap to allocate memory from.
  *
- * @param attr Capability / attribute requested for the memory block.
- * @param bytes Requested size of the allocation in bytes.
+ * @param attr capability / attribute requested for the memory block.
+ * @param bytes requested size of the allocation in bytes.
  *
- * @return A valid pointer to heap memory or NULL if no memory is available.
+ * @retval ptr a valid pointer to heap memory.
+ * @retval err NULL if no memory is available.
  */
-void *shared_multi_heap_alloc(enum smh_reg_attr attr, size_t bytes);
+void *shared_multi_heap_alloc(unsigned int attr, size_t bytes);
+
+/**
+ * @brief Allocate aligned memory from the memory shared multi-heap pool
+ *
+ * Allocates a block of memory of the specified size in bytes and with a
+ * specified capability / attribute. Takes an additional parameter specifying a
+ * power of two alignment, in bytes.
+ *
+ * @param attr capability / attribute requested for the memory block.
+ * @param align power of two alignment for the returned pointer, in bytes.
+ * @param bytes requested size of the allocation in bytes.
+ *
+ * @retval ptr a valid pointer to heap memory.
+ * @retval err NULL if no memory is available.
+ */
+void *shared_multi_heap_aligned_alloc(unsigned int attr, size_t align, size_t bytes);
 
 /**
  * @brief Free memory from the shared multi-heap pool
  *
- * Free the passed block of memory.
+ * Used to free the passed block of memory, which must be the return value of
+ * a previous call to @ref shared_multi_heap_alloc or @ref
+ * shared_multi_heap_aligned_alloc.
  *
- * @param block Block to free.
+ * @param block block to free, must be a pointer to a block allocated
+ *              by shared_multi_heap_alloc or
+ *              shared_multi_heap_aligned_alloc.
  */
 void shared_multi_heap_free(void *block);
 
+/**
+ * @brief Add a heap region to the shared multi-heap pool
+ *
+ * This adds a shared multi-heap region to the multi-heap pool.
+ *
+ * @param region pointer to the memory region to be added.
+ * @param user_data pointer to any data for the heap.
+ *
+ * @retval 0 on success.
+ * @retval -EINVAL when the region attribute is out-of-bound.
+ * @retval -ENOMEM when there are no more heaps available.
+ * @retval other errno codes
+ */
+int shared_multi_heap_add(struct shared_multi_heap_region *region, void *user_data);
+
 /**
  * @}
  */


@@ -8,43 +8,31 @@
 #include <device.h>
 #include <sys/sys_heap.h>
 #include <sys/multi_heap.h>
-#include <linker/linker-defs.h>
 #include <multi_heap/shared_multi_heap.h>
 
-#define DT_DRV_COMPAT shared_multi_heap
-
-#define NUM_REGIONS DT_NUM_INST_STATUS_OKAY(DT_DRV_COMPAT)
-
 static struct sys_multi_heap shared_multi_heap;
-static struct sys_heap heap_pool[SMH_REG_ATTR_NUM][NUM_REGIONS];
-
-static smh_init_reg_fn_t smh_init_reg;
-
-#define FOREACH_REG(n) \
-	{ .addr = (uintptr_t) LINKER_DT_RESERVED_MEM_GET_PTR(DT_DRV_INST(n)), \
-	  .size = LINKER_DT_RESERVED_MEM_GET_SIZE(DT_DRV_INST(n)), \
-	  .attr = DT_ENUM_IDX(DT_DRV_INST(n), capability), \
-	},
-
-static struct shared_multi_heap_region dt_region[NUM_REGIONS] = {
-	DT_INST_FOREACH_STATUS_OKAY(FOREACH_REG)
-};
+static struct sys_heap heap_pool[MAX_SHARED_MULTI_HEAP_ATTR][MAX_MULTI_HEAPS];
+static unsigned int attr_cnt[MAX_SHARED_MULTI_HEAP_ATTR];
 
 static void *smh_choice(struct sys_multi_heap *mheap, void *cfg, size_t align, size_t size)
 {
-	enum smh_reg_attr attr;
 	struct sys_heap *h;
+	unsigned int attr;
 	void *block;
 
-	attr = (enum smh_reg_attr) cfg;
+	attr = (unsigned int)(long) cfg;
 
-	if (attr >= SMH_REG_ATTR_NUM || size == 0) {
+	if (attr >= MAX_SHARED_MULTI_HEAP_ATTR || size == 0) {
 		return NULL;
 	}
 
-	for (size_t reg = 0; reg < NUM_REGIONS; reg++) {
-		h = &heap_pool[attr][reg];
+	/* Set in case the user requested a non-existing attr */
+	block = NULL;
+
+	for (size_t hdx = 0; hdx < attr_cnt[attr]; hdx++) {
+		h = &heap_pool[attr][hdx];
 
 		if (h->heap == NULL) {
 			return NULL;
@@ -59,29 +47,30 @@ static void *smh_choice(struct sys_multi_heap *mheap, void *cfg, size_t align, s
 	return block;
 }
 
-static void smh_init_with_attr(enum smh_reg_attr attr)
+int shared_multi_heap_add(struct shared_multi_heap_region *region, void *user_data)
 {
-	unsigned int slot = 0;
-	uint8_t *mapped;
-	size_t size;
+	static int n_heaps;
+	struct sys_heap *h;
+	unsigned int slot;
 
-	for (size_t reg = 0; reg < NUM_REGIONS; reg++) {
-		if (dt_region[reg].attr == attr) {
-
-			if (smh_init_reg != NULL) {
-				smh_init_reg(&dt_region[reg], &mapped, &size);
-			} else {
-				mapped = (uint8_t *) dt_region[reg].addr;
-				size = dt_region[reg].size;
-			}
+	if (region->attr >= MAX_SHARED_MULTI_HEAP_ATTR) {
+		return -EINVAL;
+	}
 
-			sys_heap_init(&heap_pool[attr][slot], mapped, size);
-			sys_multi_heap_add_heap(&shared_multi_heap,
-						&heap_pool[attr][slot], &dt_region[reg]);
+	/* No more heaps available */
+	if (n_heaps++ >= MAX_MULTI_HEAPS) {
+		return -ENOMEM;
+	}
 
-			slot++;
-		}
-	}
+	slot = attr_cnt[region->attr];
+	h = &heap_pool[region->attr][slot];
+
+	sys_heap_init(h, (void *) region->addr, region->size);
+	sys_multi_heap_add_heap(&shared_multi_heap, h, user_data);
+
+	attr_cnt[region->attr]++;
+
+	return 0;
 }
 
 void shared_multi_heap_free(void *block)
@@ -89,30 +78,36 @@ void shared_multi_heap_free(void *block)
 	sys_multi_heap_free(&shared_multi_heap, block);
 }
 
-void *shared_multi_heap_alloc(enum smh_reg_attr attr, size_t bytes)
+void *shared_multi_heap_alloc(unsigned int attr, size_t bytes)
 {
-	return sys_multi_heap_alloc(&shared_multi_heap, (void *) attr, bytes);
+	if (attr >= MAX_SHARED_MULTI_HEAP_ATTR) {
+		return NULL;
+	}
+
+	return sys_multi_heap_alloc(&shared_multi_heap, (void *)(long) attr, bytes);
 }
 
-int shared_multi_heap_pool_init(smh_init_reg_fn_t smh_init_reg_fn)
+void *shared_multi_heap_aligned_alloc(unsigned int attr, size_t align, size_t bytes)
 {
-	smh_init_reg = smh_init_reg_fn;
+	if (attr >= MAX_SHARED_MULTI_HEAP_ATTR) {
+		return NULL;
+	}
+
+	return sys_multi_heap_aligned_alloc(&shared_multi_heap, (void *)(long) attr,
+					    align, bytes);
+}
+
+int shared_multi_heap_pool_init(void)
+{
+	static atomic_t state;
+
+	if (!atomic_cas(&state, 0, 1)) {
+		return -EALREADY;
+	}
 
 	sys_multi_heap_init(&shared_multi_heap, smh_choice);
 
-	for (size_t attr = 0; attr < SMH_REG_ATTR_NUM; attr++) {
-		smh_init_with_attr(attr);
-	}
+	atomic_set(&state, 1);
 
 	return 0;
 }
-
-static int shared_multi_heap_init(const struct device *dev)
-{
-	__ASSERT_NO_MSG(NUM_REGIONS <= MAX_MULTI_HEAPS);
-
-	/* Nothing to do here. */
-	return 0;
-}
-
-SYS_INIT(shared_multi_heap_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);


@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2021 Carlo Caione <ccaione@baylibre.com>
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+/ {
+	/delete-node/ memory@38000000;
+
+	sram2_3: memory@38000000 {
+		compatible = "zephyr,memory-region", "mmio-sram";
+		reg = <0x38000000 0x100000>;
+		zephyr,memory-region = "SRAM2_3";
+	};
+
+	res0: memory@38100000 {
+		compatible = "zephyr,memory-region", "mmio-sram";
+		reg = <0x38100000 0x1000>;
+		zephyr,memory-region = "RES0";
+		zephyr,memory-region-mpu = "RAM";
+	};
+
+	res1: memory@38200000 {
+		compatible = "zephyr,memory-region", "mmio-sram";
+		reg = <0x38200000 0x2000>;
+		zephyr,memory-region = "RES1";
+		zephyr,memory-region-mpu = "RAM_NOCACHE";
+	};
+
+	res2: memory@38300000 {
+		compatible = "zephyr,memory-region", "mmio-sram";
+		reg = <0x38300000 0x3000>;
+		zephyr,memory-region = "RES2";
+		zephyr,memory-region-mpu = "RAM";
+	};
+};


@@ -0,0 +1,2 @@
+CONFIG_HAVE_CUSTOM_LINKER_SCRIPT=y
+CONFIG_CUSTOM_LINKER_SCRIPT="linker_arm64_shared_pool.ld"


@@ -5,30 +5,32 @@
  */
 
 / {
-	reserved-memory {
-		compatible = "reserved-memory";
-		#address-cells = <1>;
-		#size-cells = <1>;
-
-		res0: reserved@42000000 {
-			compatible = "shared-multi-heap";
-			reg = <0x42000000 0x1000>;
-			capability = "cacheable";
-			label = "res0";
-		};
-
-		res1: reserved@43000000 {
-			compatible = "shared-multi-heap";
-			reg = <0x43000000 0x2000>;
-			capability = "non-cacheable";
-			label = "res1";
-		};
-
-		res2: reserved2@44000000 {
-			compatible = "shared-multi-heap";
-			reg = <0x44000000 0x3000>;
-			capability = "cacheable";
-			label = "res2";
+	soc {
+		res0: memory@42000000 {
+			compatible = "zephyr,memory-region", "mmio-sram";
+			reg = <0x0 0x42000000 0x0 0x1000>;
+			zephyr,memory-region = "RES0";
+			zephyr,memory-region-mpu = "RAM";
+		};
+
+		res1: memory@43000000 {
+			compatible = "zephyr,memory-region", "mmio-sram";
+			reg = <0x0 0x43000000 0x0 0x2000>;
+			zephyr,memory-region = "RES1";
+			zephyr,memory-region-mpu = "RAM_NOCACHE";
+		};
+
+		res_no_mpu: memory@45000000 {
+			compatible = "zephyr,memory-region", "mmio-sram";
+			reg = <0x0 0x45000000 0x0 0x1000>;
+			zephyr,memory-region = "RES_NO_MPU";
+		};
+
+		res2: memory@44000000 {
+			compatible = "zephyr,memory-region", "mmio-sram";
+			reg = <0x0 0x44000000 0x0 0x3000>;
+			zephyr,memory-region = "RES2";
+			zephyr,memory-region-mpu = "RAM";
 		};
 	};
 };


@@ -6,18 +6,19 @@
 
 #include <linker/sections.h>
 #include <devicetree.h>
+#include <linker/devicetree_regions.h>
 
 #include <linker/linker-defs.h>
 #include <linker/linker-tool.h>
 
 MEMORY
 {
-	LINKER_DT_RESERVED_MEM_REGIONS()
+	LINKER_DT_REGIONS()
 }
 
 SECTIONS
 {
-	LINKER_DT_RESERVED_MEM_SECTIONS()
+	LINKER_DT_SECTIONS()
 }
 
 #include <arch/arm64/scripts/linker.ld>


@@ -2,6 +2,4 @@
 # SPDX-License-Identifier: Apache-2.0
 
 CONFIG_ZTEST=y
-CONFIG_HAVE_CUSTOM_LINKER_SCRIPT=y
-CONFIG_CUSTOM_LINKER_SCRIPT="linker_arm64_shared_pool.ld"
 CONFIG_SHARED_MULTI_HEAP=y


@@ -11,101 +11,194 @@
 
 #include <multi_heap/shared_multi_heap.h>
 
-#define MAX_REGIONS (3)
+#define DT_DRV_COMPAT zephyr_memory_region
 
-static struct {
-	struct shared_multi_heap_region *reg;
-	uint8_t *v_addr;
-} map[MAX_REGIONS];
+#define RES0_CACHE_ADDR		DT_REG_ADDR(DT_NODELABEL(res0))
+#define RES1_NOCACHE_ADDR	DT_REG_ADDR(DT_NODELABEL(res1))
+#define RES2_CACHE_ADDR		DT_REG_ADDR(DT_NODELABEL(res2))
 
-static bool smh_reg_init(struct shared_multi_heap_region *reg, uint8_t **v_addr, size_t *size)
+struct region_map {
+	struct shared_multi_heap_region region;
+	uintptr_t p_addr;
+};
+
+#define FOREACH_REG(n) \
+	{ \
+		.region = { \
+			.addr = (uintptr_t) DT_INST_REG_ADDR(n), \
+			.size = DT_INST_REG_SIZE(n), \
+			.attr = DT_INST_ENUM_IDX_OR(n, zephyr_memory_region_mpu, \
+						    SMH_REG_ATTR_NUM), \
+		}, \
+	},
+
+struct region_map map[] = {
+	DT_INST_FOREACH_STATUS_OKAY(FOREACH_REG)
+};
+
+#if defined(CONFIG_MMU)
+static void smh_reg_map(struct shared_multi_heap_region *region)
 {
-	static int reg_idx;
 	uint32_t mem_attr;
+	uint8_t *v_addr;
 
-	mem_attr = (reg->attr == SMH_REG_ATTR_CACHEABLE) ? K_MEM_CACHE_WB : K_MEM_CACHE_NONE;
+	mem_attr = (region->attr == SMH_REG_ATTR_CACHEABLE) ? K_MEM_CACHE_WB : K_MEM_CACHE_NONE;
 	mem_attr |= K_MEM_PERM_RW;
 
-	z_phys_map(v_addr, reg->addr, reg->size, mem_attr);
+	z_phys_map(&v_addr, region->addr, region->size, mem_attr);
 
-	*size = reg->size;
-
-	/* Save the mapping to retrieve the region from the vaddr */
-	map[reg_idx].reg = reg;
-	map[reg_idx].v_addr = *v_addr;
-
-	reg_idx++;
-
-	return true;
+	region->addr = (uintptr_t) v_addr;
 }
+#endif /* CONFIG_MMU */
 
-static struct shared_multi_heap_region *get_reg_addr(uint8_t *v_addr)
+/*
+ * Given a virtual address retrieve the original memory region that the
+ * mapping belongs to.
+ */
+static struct region_map *get_region_map(void *v_addr)
 {
-	for (size_t reg = 0; reg < MAX_REGIONS; reg++) {
-		if (v_addr >= map[reg].v_addr &&
-		    v_addr < map[reg].v_addr + map[reg].reg->size) {
-			return map[reg].reg;
+	for (size_t reg = 0; reg < ARRAY_SIZE(map); reg++) {
+		if ((uintptr_t) v_addr >= map[reg].region.addr &&
+		    (uintptr_t) v_addr < map[reg].region.addr + map[reg].region.size) {
+			return &map[reg];
 		}
 	}
 	return NULL;
 }
 
+static inline enum smh_reg_attr mpu_to_reg_attr(int mpu_attr)
+{
+	/*
+	 * All the memory regions defined in the DT with the MPU property `RAM`
+	 * can be accessed and memory can be retrieved from using the attribute
+	 * `SMH_REG_ATTR_CACHEABLE`.
+	 *
+	 * All the memory regions defined in the DT with the MPU property
+	 * `RAM_NOCACHE` can be accessed and memory can be retrieved from using
+	 * the attribute `SMH_REG_ATTR_NON_CACHEABLE`.
+	 *
+	 * [MPU attr] -> [SMH attr]
+	 *
+	 * RAM          -> SMH_REG_ATTR_CACHEABLE
+	 * RAM_NOCACHE  -> SMH_REG_ATTR_NON_CACHEABLE
+	 */
+	switch (mpu_attr) {
+	case 0: /* RAM */
+		return SMH_REG_ATTR_CACHEABLE;
+	case 1: /* RAM_NOCACHE */
+		return SMH_REG_ATTR_NON_CACHEABLE;
+	default:
+		ztest_test_fail();
+	}
+
+	return 0;
+}
+
+static void fill_multi_heap(void)
+{
+	struct region_map *reg_map;
+
+	for (size_t idx = 0; idx < DT_NUM_INST_STATUS_OKAY(DT_DRV_COMPAT); idx++) {
+		reg_map = &map[idx];
+
+		/* zephyr,memory-region-mpu property not found. Skip it. */
+		if (reg_map->region.attr == SMH_REG_ATTR_NUM) {
+			continue;
+		}
+
+		/* Convert MPU attributes to shared-multi-heap capabilities */
+		reg_map->region.attr = mpu_to_reg_attr(reg_map->region.attr);
+
+		/* Assume for now that phys == virt */
+		reg_map->p_addr = reg_map->region.addr;
+
+#if defined(CONFIG_MMU)
+		/*
+		 * For MMU-enabled platforms we have to MMU-map the physical
+		 * address retrieved from DT at run-time because the SMH
+		 * framework expects virtual addresses.
+		 *
+		 * For MPU-enabled platforms the code assumes that the regions
+		 * are configured at build-time, so no map is needed.
+		 */
+		smh_reg_map(&reg_map->region);
+#endif /* CONFIG_MMU */
+
+		shared_multi_heap_add(&reg_map->region, NULL);
+	}
+}
+
 void test_shared_multi_heap(void)
 {
-	struct shared_multi_heap_region *reg;
-	uint8_t *block;
+	struct region_map *reg_map;
+	void *block;
+	int ret;
 
-	shared_multi_heap_pool_init(smh_reg_init);
+	ret = shared_multi_heap_pool_init();
+	zassert_equal(0, ret, "failed initialization");
+
+	/*
+	 * Return -EALREADY if already inited
+	 */
+	ret = shared_multi_heap_pool_init();
+	zassert_equal(-EALREADY, ret, "second init should fail");
+
+	/*
+	 * Fill the buffer pool with the memory heaps coming from DT
+	 */
+	fill_multi_heap();
 
 	/*
 	 * Request a small cacheable chunk. It should be allocated in the
-	 * smaller region (@ 0x42000000)
+	 * smaller region RES0
 	 */
 	block = shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x40);
-	reg = get_reg_addr(block);
+	reg_map = get_region_map(block);
 
-	zassert_equal(reg->addr, 0x42000000, "block in the wrong memory region");
-	zassert_equal(reg->attr, SMH_REG_ATTR_CACHEABLE, "wrong memory attribute");
+	zassert_equal(reg_map->p_addr, RES0_CACHE_ADDR, "block in the wrong memory region");
+	zassert_equal(reg_map->region.attr, SMH_REG_ATTR_CACHEABLE, "wrong memory attribute");
 
 	/*
 	 * Request another small cacheable chunk. It should be allocated in the
-	 * smaller cacheable region (@ 0x42000000)
+	 * smaller cacheable region RES0
 	 */
 	block = shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x80);
-	reg = get_reg_addr(block);
+	reg_map = get_region_map(block);
 
-	zassert_equal(reg->addr, 0x42000000, "block in the wrong memory region");
-	zassert_equal(reg->attr, SMH_REG_ATTR_CACHEABLE, "wrong memory attribute");
+	zassert_equal(reg_map->p_addr, RES0_CACHE_ADDR, "block in the wrong memory region");
+	zassert_equal(reg_map->region.attr, SMH_REG_ATTR_CACHEABLE, "wrong memory attribute");
 
 	/*
 	 * Request a big cacheable chunk. It should be allocated in the
-	 * bigger cacheable region (@ 0x44000000)
+	 * bigger cacheable region RES2
 	 */
 	block = shared_multi_heap_alloc(SMH_REG_ATTR_CACHEABLE, 0x1200);
-	reg = get_reg_addr(block);
+	reg_map = get_region_map(block);
 
-	zassert_equal(reg->addr, 0x44000000, "block in the wrong memory region");
-	zassert_equal(reg->attr, SMH_REG_ATTR_CACHEABLE, "wrong memory attribute");
+	zassert_equal(reg_map->p_addr, RES2_CACHE_ADDR, "block in the wrong memory region");
+	zassert_equal(reg_map->region.attr, SMH_REG_ATTR_CACHEABLE, "wrong memory attribute");
 
 	/*
 	 * Request a non-cacheable chunk. It should be allocated in the
-	 * non-cacheable region (@ 0x43000000)
+	 * non-cacheable region RES1
 	 */
 	block = shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x100);
-	reg = get_reg_addr(block);
+	reg_map = get_region_map(block);
 
-	zassert_equal(reg->addr, 0x43000000, "block in the wrong memory region");
-	zassert_equal(reg->attr, SMH_REG_ATTR_NON_CACHEABLE, "wrong memory attribute");
+	zassert_equal(reg_map->p_addr, RES1_NOCACHE_ADDR, "block in the wrong memory region");
+	zassert_equal(reg_map->region.attr, SMH_REG_ATTR_NON_CACHEABLE, "wrong memory attribute");
 
 	/*
 	 * Request again a non-cacheable chunk. It should be allocated in the
-	 * non-cacheable region (@ 0x43000000)
+	 * non-cacheable region RES1
 	 */
 	block = shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x100);
-	reg = get_reg_addr(block);
+	reg_map = get_region_map(block);
 
-	zassert_equal(reg->addr, 0x43000000, "block in the wrong memory region");
-	zassert_equal(reg->attr, SMH_REG_ATTR_NON_CACHEABLE, "wrong memory attribute");
+	zassert_equal(reg_map->p_addr, RES1_NOCACHE_ADDR, "block in the wrong memory region");
+	zassert_equal(reg_map->region.attr, SMH_REG_ATTR_NON_CACHEABLE, "wrong memory attribute");
 
 	/* Request a block too big */
 	block = shared_multi_heap_alloc(SMH_REG_ATTR_NON_CACHEABLE, 0x10000);
@@ -116,9 +209,8 @@ void test_shared_multi_heap(void)
 	zassert_is_null(block, "0 size accepted as valid");
 
 	/* Request a non-existent attribute */
-	block = shared_multi_heap_alloc(SMH_REG_ATTR_NUM + 1, 0x100);
+	block = shared_multi_heap_alloc(MAX_SHARED_MULTI_HEAP_ATTR, 0x100);
 	zassert_is_null(block, "wrong attribute accepted as valid");
 }
 
 void test_main(void)
void test_main(void) void test_main(void)


@@ -3,6 +3,6 @@
 tests:
   kernel.shared_multi_heap:
-    platform_allow: qemu_cortex_a53
+    platform_allow: qemu_cortex_a53 mps2_an521
     tags: board multi_heap
     harness: ztest