Commit graph

27 commits

Author SHA1 Message Date
Gerard Marull-Paretas
cbd31d720b lib: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all lib code to the new
prefix <zephyr/...>. Note that the conversion has been scripted; refer
to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-06 19:58:09 +02:00
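
For illustration, a heap-related include changes roughly like this under the new convention (a minimal sketch; the full list of touched headers is in the referenced PR):

    /* before the migration */
    #include <sys/sys_heap.h>

    /* after the migration */
    #include <zephyr/sys/sys_heap.h>
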
Damian Krolik
5f5410a0cc sys: heap: support maximum allocated bytes statistic
Besides the current allocated/free bytes, keep track of
the maximum allocated bytes to help determine the heap
size requirements. Also, provide a function to reset
the statistic.

Signed-off-by: Damian Krolik <damian.krolik@nordicsemi.no>
2022-04-13 13:27:28 -07:00
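
A minimal usage sketch of the new statistic, assuming the stats struct, the max_allocated_bytes field, and the reset helper are named as below (names inferred from this message; verify against the header at this revision):

    #include <zephyr/sys/printk.h>
    #include <zephyr/sys/sys_heap.h>

    void report_heap_high_water(struct sys_heap *h)
    {
        struct sys_heap_runtime_stats stats; /* assumed struct name */

        sys_heap_runtime_stats_get(h, &stats);
        printk("max allocated so far: %zu bytes\n", stats.max_allocated_bytes);

        /* start a fresh measurement window (assumed helper name) */
        sys_heap_runtime_stats_reset_max_allocated_bytes(h);
    }
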
Nicolas Pitre
40a94d2451 lib/os/heap: use BIT() and BIT_MASK() on bit fields
This is a clean way to make MISRA happy.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-12-13 17:16:07 -05:00
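
A sketch of the kind of substitution this implies; BIT() and BIT_MASK() come from the Zephyr util header, while the concrete fields touched are in the diff:

    #include <stdint.h>
    #include <zephyr/sys/util.h>

    /* Illustrative only: open-coded shifts vs. the util.h helpers. */
    uint32_t set_flag_and_mask(uint32_t flags)
    {
        flags |= (1U << 3);              /* before */
        flags |= BIT(3);                 /* after  */

        uint32_t low5 = (1U << 5) - 1U;  /* before */
        low5 = BIT_MASK(5);              /* after  */

        return flags & low5;
    }
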
Nicolas Pitre
5aeb809374 lib/os/heap-validate: code cleanup
Remove code duplication, use common code idiom, etc.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-11-15 11:03:57 -05:00
Chen Peng1
28f834c914 sys_heap: add check for sys_heap_runtime_stats_get API
Add checks of the sys_heap_runtime_stats_get API to the
existing sys_heap_validate function.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-11-11 16:21:43 -05:00
Chen Peng1
a71cd8790f heap: add functions to get heap runtime statistics
Add functions to get the sys_heap runtime statistics,
including total free bytes and total allocated bytes.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-11-11 16:21:43 -05:00
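
A minimal usage sketch of the API introduced here (struct and field names are inferred from this message; check the header at this revision):

    #include <zephyr/sys/printk.h>
    #include <zephyr/sys/sys_heap.h>

    void show_heap_stats(struct sys_heap *h)
    {
        struct sys_heap_runtime_stats stats; /* assumed struct name */

        sys_heap_runtime_stats_get(h, &stats);
        printk("allocated: %zu bytes, free: %zu bytes\n",
               stats.allocated_bytes, stats.free_bytes);
    }
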
Yasushi SHOJI
fedab40576 lib: os: heap-validate: Fix wrong chunkid returned by max_chunkid()
With a 64-byte heap and a 1-byte allocation on a big heap, we get:

  0   1   2   3   4   5   6   7
| h | h | b | b | c | 1 | s | f |

where
  - h: chunk0 header
  - b: buckets in chunk0
  - c: chunk header for the first allocation
  - 1: chunk mem
  - s: solo free header
  - f: end marker / footer

max_chunkid() was returning h->end_chunk - min_chunk_size(h), which is
5 because min_chunk_size() on a big heap is 2.  This works if you
don't have the solo free header at 6 and the heap is like:

  0   1   2   3   4   5   6
| h | h | b | b | c | 1 | f |

max_chunkid() in this case gives you 6 - 2 = 4, which is the right
chunkid for the last chunk header.

This commit replaces max_chunkid() with h->end_chunk and "<=" (less
than or equal to) with "<" (less than), so that it always compares
against the end marker chunkid, but the code won't touch the end
marker itself.

Signed-off-by: Yasushi SHOJI <yashi@spacecubics.com>
2021-06-23 06:18:44 -04:00
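
In code terms, the fix amounts to a change of this shape in the validation walk (a sketch inferred from the message above, not a copy of the diff):

    /* before: the upper bound could stop short of the solo free header */
    for (c = right_chunk(h, 0); c <= max_chunkid(h); c = right_chunk(h, c)) {
        /* ... validate chunk c ... */
    }

    /* after: walk up to, but never onto, the end marker chunk */
    for (c = right_chunk(h, 0); c < h->end_chunk; c = right_chunk(h, c)) {
        /* ... validate chunk c ... */
    }
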
Jennifer Williams
9517b87d35 lib: os: add final else where missing in heap*
heap* had several places missing the final else statement in the
if ... else if construct. This commit adds else {} to comply with
coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-28 20:28:19 -04:00
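
Coding guideline 15.7 (MISRA C:2012) requires every if ... else if chain to end with an else; the change is of this generic shape (placeholder names, purely illustrative):

    if (cond_a) {
        handle_a();
    } else if (cond_b) {
        handle_b();
    } else {
        /* deliberately empty: final else required by guideline 15.7 */
    }
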
Anas Nashif
b8312fab4c Revert "lib: os: various places fix missing final else"
This reverts commit 163b7f0d82.

This is causing test failures, see #34624

Fixes #34624

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-27 22:42:00 -04:00
Jennifer Williams
163b7f0d82 lib: os: various places fix missing final else
The lib/os/ code had several places missing the final else
statement in the if ... else if construct. This commit adds
else {} or a simple refactor to comply with coding guideline 15.7.
- cbprintf_complete.c
- cbprintf_nano.c
- heap-validate.c
- heap.c
- onoff.c
- p4wq.c
- sem.c

Also resolves the checkpatch issue that comments should align the *
on each line.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams
efc78b5b46 lib: os: fix heap_print_info missing final else in construct
The if ... else if ... construct was missing the final else.
This commit refactors it to comply with coding guideline 15.7.
The logic is to check whether a chunk is used or free, and to not
increment for the reserved chunks (first/last) in the heap.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-14 09:20:20 -04:00
Anas Nashif
1b6933d231 kernel: heap: rename 'free' and 'alloc'
This symbol is reserved and usage of reserved symbols violates the
coding guidelines. (MISRA 21.2)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-25 07:28:37 -04:00
Nicolas Pitre
b1eefc0c26 lib/os/heap: straighten up our type usage
The size_t usage, especially in struct z_heap_bucket, made the heap
header almost 2x bigger than it needs to be on 64-bit systems.
This prompted me to clean up our type usage to make the code more
efficient and easier to understand. From now on:

- chunkid_t is for absolute chunk position measured in chunk units
- chunksz_t is for chunk sizes measured in chunk units
- size_t is for buffer sizes measured in bytes

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:39 -04:00
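
A sketch of what the convention means in the code (the typedefs below are assumptions; the point is that chunk positions and chunk sizes are counted in chunk units, while byte counts stay size_t):

    #include <stddef.h>

    typedef size_t chunkid_t;  /* absolute chunk position, in chunk units (assumed) */
    typedef size_t chunksz_t;  /* chunk size, in chunk units (assumed) */

    struct sys_heap;

    /* buffer sizes remain plain size_t, measured in bytes */
    void *sys_heap_alloc(struct sys_heap *heap, size_t bytes);
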
Nicolas Pitre
e919bb2d16 lib/os/heap: abstract conversion from chunk size to usable bytes
This is the reverse of bytes_to_chunksz().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:39 -04:00
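
The abstraction is presumably the inverse conversion sketched below (function and helper names are assumptions based on the message; CHUNK_UNIT is the 8-byte allocation granularity):

    /* bytes_to_chunksz() rounds a request plus its header up to whole chunks;
     * this is the reverse: how many usable bytes a chunk of a given size holds.
     */
    static inline size_t chunksz_to_bytes(struct z_heap *h, chunksz_t chunk_sz)
    {
        return chunk_sz * CHUNK_UNIT - chunk_header_bytes(h);
    }
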
Nicolas Pitre
a54e101a1e lib/os/heap: rename struct z_heap.len to struct z_heap.end_chunk
The end marker chunk was represented by the len field of struct z_heap.
It is now renamed to end_chunk to make it more obvious what it is.

And while at it...

Given that it is used in size_too_big() to cap the allocation size
already, we no longer need to test the bucket index against the
biggest index possible derived from end_chunk in alloc_chunk(). The
corresponding bucket_idx() call is relatively expensive on some
architectures so avoiding it (turning it into a CHECK() instead) is
a good thing.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:39 -04:00
Nicolas Pitre
f2fd0e8bb6 lib/os/heap: make printed heap info more useful
Turn sys_heap_dump() into sys_heap_print_info() to better reflect
what it actually does, and improve the information being printed.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 16:06:53 +01:00
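
Typical usage would be along these lines (the second, dump-all-chunks parameter is an assumption; check the prototype at this revision):

    #include <zephyr/sys/sys_heap.h>

    void debug_dump(struct sys_heap *h)
    {
        /* false: summary only; true would also list every chunk (assumed) */
        sys_heap_print_info(h, false);
    }
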
Nicolas Pitre
9b538e4079 lib/os/heap: make "solo free headers" into first-class citizens
Instead of limiting the excess split-off to sufficiently large chunks
in split_alloc(), let's allow normal allocations to create "solo free
headers" just like with aligned allocations. There is no point leaving
them in the allocated chunk if the user didn't ask for it. Doing so
makes them eligible for merging at the next opportunity and potentially
reusable sooner.

Also make the validation code aware of them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-07-14 19:35:52 -04:00
Nicolas Pitre
130963ad2f lib/os/heap: add an additional validation criteria
One fundamental validation criterion is to never have consecutive free
chunks. If that ever happens we failed to merge them. That means a free
chunk must always be surrounded by used chunks.

It is a pain to extend valid_chunk() with new rules as it is.
So a VALIDATE() macro is introduced to make things easier to work with.
It also allows for isolating each test, possibly making VALIDATE() into
__ASSERT() to determine exactly which test is tripping when debugging.

Finally, because of that new validation rule, sys_heap_validate() must
be modified so as not to use valid_chunk() while it is flipping all the
"used" flags. So let's run valid_chunk() up front before altering
chunk headers.

Now sys_heap_validate() has become justifiably more expensive and a few
emulated targets are about to bust the tests/lib/heap test timeout. So
bump the timeout as well.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-07-14 19:35:52 -04:00
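
The VALIDATE() helper described here is presumably of this shape (a sketch; the commented-out variant shows how it can be turned into __ASSERT() to pinpoint the failing rule while debugging):

    /* Each validation rule becomes one VALIDATE(<condition>) line. */
    #define VALIDATE(cond) do { if (!(cond)) { return false; } } while (0)

    /* debugging variant (assumed):
     * #define VALIDATE(cond) __ASSERT(cond, #cond)
     */
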
Nicolas Pitre
64af35049c lib/os/heap: debugging facility to dump the heap structure to the console
It is linked in only when used, so it is handy to always have it
around for analysis purposes.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-26 11:41:43 -07:00
Nicolas Pitre
ad59e923e9 sys_heap: reduce the size of struct z_heap_bucket by half
This struct is taking up most of the heap's constant footprint overhead.
We can easily get rid of the list_size member as it is mostly used to
determine if the list is empty, and that can be determined through
other means.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-21 19:25:35 +02:00
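
The "other means" is presumably the heap's bitmask of non-empty buckets, so emptiness can be read from one bit instead of a per-bucket counter (a sketch assuming an avail_buckets field; verify against struct z_heap at this revision):

    /* Sketch: bucket bi is empty iff its bit is clear in the bitmask. */
    static inline bool bucket_is_empty(struct z_heap *h, int bi)
    {
        return (h->avail_buckets & BIT(bi)) == 0U;
    }
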
Nicolas Pitre
d1125d21d4 sys_heap: remove need for last_chunk()
We already have chunk #0 containing our struct z_heap and marked as
used. We can add a partial chunk at the very end that is also marked
as used. By doing so, there is no longer a need for checking heap
boundaries at run time when merging/splitting chunks, meaning fewer
conditionals in the code's hot path.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-21 19:25:35 +02:00
Nicolas Pitre
6d827fa080 sys_heap: introduce min_chunk_size()
With this we can remove magic constants, especially those used with
big_heap().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-21 19:25:35 +02:00
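
The helper presumably reduces to "the chunk size needed for a one-byte allocation" (an assumed definition), which also explains the value 2 quoted for big heaps in the later max_chunkid() fix: an 8-byte header plus 1 byte, rounded up to 8-byte chunk units, is 2 chunks:

    /* Sketch (assumed definition): the smallest legal chunk, in chunk units. */
    static inline chunksz_t min_chunk_size(struct z_heap *h)
    {
        return bytes_to_chunksz(h, 1);
    }
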
Nicolas Pitre
e553161b8e sys_heap: optimize struct z_heap
It is possible to remove a few fields from struct z_heap, removing
some runtime indirections by doing so:

- The buf pointer is actually the same as the struct z_heap pointer
  itself. So let's simply create chunk_buf() that performs a type
  conversion. That type is also chunk_unit_t now rather than u64_t so
  it can be defined based on CHUNK_UNIT.

- Replace the struct z_heap_bucket pointer by a zero-sized array at the
  end of struct z_heap.

- Make chunk #0 into an actual chunk with its own header. This allows
  for removing the chunk0 field and streamlining the code. This way
  h->chunk0 becomes right_chunk(h, 0). This sets the table for further
  simplifications to come.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-21 19:25:35 +02:00
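
A sketch of the first point: with the metadata at the start of the same buffer, chunk_buf() is just a cast, and chunk_unit_t can be defined from CHUNK_UNIT (definitions below are assumptions):

    #define CHUNK_UNIT 8

    typedef struct { char bytes[CHUNK_UNIT]; } chunk_unit_t;  /* assumed definition */

    static inline chunk_unit_t *chunk_buf(struct z_heap *h)
    {
        /* the heap header occupies the start of the chunk buffer */
        return (chunk_unit_t *)h;
    }
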
Nicolas Pitre
54950aca01 sys_heap: provide more chunk_fields accessors
Let's provide accessors for getting and setting every field so that the
chunk header layout is abstracted away from the main code. Those are:

SIZE_AND_USED: chunk_used(), chunk_size(), set_chunk_used() and
set_chunk_size().

LEFT_SIZE: left_chunk() and set_left_chunk_size().

FREE_PREV: prev_free_chunk() and set_prev_free_chunk().

FREE_NEXT: next_free_chunk() and set_next_free_chunk().

To be consistent, the former chunk_set_used() is now set_chunk_used().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-21 19:25:35 +02:00
Nicolas Pitre
f97eca26e6 sys_heap: some cleanups to make the code clearer
First, some renames to make accessors more explicit:

  size() --> chunk_size()
  used() --> chunk_used()
  free_prev() --> prev_free_chunk()
  free_next() --> next_free_chunk()

Then, the return type of chunk_size() is changed from chunkid_t to
size_t, and chunk_used() from chunkid_t to bool.

The left_size() accessor is used only once and can be easily substituted
by left_chunk(), so it is removed.

And in free_list_add() the variable b is renamed to bi so as to be
consistent with usage in sys_heap_alloc().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2020-06-21 19:25:35 +02:00
Kumar Gala
a1b77fd589 zephyr: replace zephyr integer types with C99 types
	git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andy Ross
aa4227754c lib/os: Add sys_heap, a new/simpler/faster memory allocator
The existing mem_pool implementation has been an endless source of
frustration.  It's had alignment bugs, it's had racy behavior.  It's
never been particularly fast.  It's outrageously complicated to
configure statically.  And while its fragmentation resistance and
overhead on small blocks is good, its space efficiency has always
been very poor due to the four-way buddy scheme.

This patch introduces sys_heap.  It's a more or less conventional
segregated fit allocator with power-of-two buckets.  It doesn't expose
its level structure to the user at all, simply taking an arbitrarily
aligned pointer to memory.  It stores all metadata inside the heap
region.  It allocates and frees by simple pointer and not block ID.
Static initialization is trivial, and runtime initialization is only a
few cycles to format and add one block to a list header.

It has excellent space efficiency.  Chunks can be split arbitrarily in
8-byte units.  Overhead is only four bytes per allocated chunk (eight
bytes for heaps >256kB or on 64-bit systems), plus a log2-sized array
of 2-word bucket headers.  No coarse alignment restrictions on blocks;
they can be split and merged (in units of 8 bytes) arbitrarily.

It has good fragmentation resistance.  Freed blocks are always
immediately merged with adjacent free blocks.  Allocations are
attempted from a sample of the smallest bucket that might fit, falling
back rapidly to the smallest block guaranteed to fit.  Split memory
remaining in the chunk is always returned immediately to the heap for
other allocation.

It has excellent performance with firmly bounded runtime.  All
operations are constant time (though there is a search of the smallest
bucket that has a compile-time-configurable upper bound; setting this
to extreme values results in an effectively linear search of the
list), objectively fast (about a hundred instructions) and amenable to
locked operation.  No more need for fragile lock relaxation trickery.

It also contains an extensive validation and stress test framework,
something that was sorely lacking in the previous implementation.

Note that sys_heap is not a compatible API with sys_mem_pool and
k_mem_pool.  Partial wrappers for those (now-) legacy APIs will appear
later and a deprecation strategy needs to be chosen.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-14 10:05:55 -07:00
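
Basic usage of the new allocator looks like this (a minimal sketch using the public sys_heap API; buffer size and names are chosen for illustration, and the include path reflects the later <zephyr/...> prefix):

    #include <stdint.h>
    #include <zephyr/sys/sys_heap.h>

    static struct sys_heap my_heap;
    static uint8_t my_heap_mem[2048];   /* arbitrarily aligned region is fine */

    void heap_example(void)
    {
        sys_heap_init(&my_heap, my_heap_mem, sizeof(my_heap_mem));

        void *block = sys_heap_alloc(&my_heap, 100);
        if (block != NULL) {
            /* use the 100-byte block, then return it to the heap */
            sys_heap_free(&my_heap, block);
        }
    }
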