MISRA-C Rule 5.3 states that identifiers in inner scope should
not hide identifiers in outer scope.
In the function sys_heap_alloc(), the variable "chunksz" collides
with the function named chunksz(), so rename the variable.
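For illustration, a minimal sketch of the clash and the fix (the new
variable name here is hypothetical):

    #include <stddef.h>

    size_t chunksz(size_t bytes);        /* outer-scope function */

    void *alloc_example(size_t bytes)
    {
        /* before: a local "size_t chunksz = ..." hid the function
         * above, violating MISRA-C Rule 5.3; renaming the local
         * avoids the clash
         */
        size_t chunk_sz = chunksz(bytes);

        (void)chunk_sz;
        return NULL;    /* actual allocation elided */
    }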
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Suppress the Coverity warning about use of the semaphore, as this
semaphore is used and freed only within this function.
Fixes: #18960
Signed-off-by: David Leach <david.leach@nxp.com>
Just as NULL pointers should not be dereferenced, they should
not be called either.
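For illustration, the kind of guard this implies (names are
hypothetical):

    #include <stddef.h>

    typedef void (*handler_t)(int err);

    static handler_t handler;    /* may legitimately be NULL */

    static void notify(int err)
    {
        /* guard the call just as a dereference would be guarded */
        if (handler != NULL) {
            handler(err);
        }
    }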
Fixes #26723
Signed-off-by: Pete Skeggs <peter.skeggs@nordicsemi.no>
This whole code block is ifdef'ed on
CONFIG_NEWLIB_LIBC_ALIGNED_HEAP_SIZE being NOT defined;
remove it, as this can never be true.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We've come a long way since this was written: we now have generic
RAM bounds definitions and MPU capabilities, so use them here.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Set actual display size, obtained via the display driver, when using
static rendering buffers. If the actual screen size is not set, an
out-of-bounds write could occur when the maximum resolution settings
for LVGL are larger than the actual screen resolution.
Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
So far the semaphore was used with possible values in the range 0 to
UINT32_MAX. Each write resulted in a semaphore increment. As an
example, after two writes and a single read the eventfd counter was
correctly zeroed, but the semaphore counter was not. This means that
poll() signalled POLLIN at this stage (semaphore counter > 0), but it
clearly should not have (eventfd counter == 0). The blocking version
of read() was also returning immediately, with 0 as the previous
eventfd counter value.
Change read_sem to be a binary semaphore whose counter represents the
eventfd counter being zero (semaphore counter == 0) or
non-zero (semaphore counter == 1). Try to take the semaphore in
eventfd read() and decrement the eventfd counter when the semaphore
was available.
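A minimal sketch of the scheme, assuming hypothetical field names and
read_sem initialized with a maximum count of 1:

    #include <zephyr.h>

    struct efd_ctx {
        struct k_spinlock lock;
        struct k_sem read_sem;    /* 1 <=> eventfd counter non-zero */
        uint64_t cnt;
    };

    static int efd_read(struct efd_ctx *efd, uint64_t *value)
    {
        /* blocks until the eventfd counter is non-zero */
        k_sem_take(&efd->read_sem, K_FOREVER);

        k_spinlock_key_t key = k_spin_lock(&efd->lock);

        *value = efd->cnt;
        efd->cnt = 0;
        /* counter is zero again and read_sem is now 0, so poll()
         * stops reporting POLLIN until a write calls k_sem_give()
         * (which saturates at 1, keeping the semaphore binary)
         */
        k_spin_unlock(&efd->lock, key);

        return 0;
    }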
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Previously, if the arena size was zero, malloc would always fail.
However, the log message was only visible if debug messages were
enabled. Logging an error will hopefully make it more obvious that
CONFIG_MINIMAL_LIBC_MALLOC_ARENA_SIZE should be non-zero if the
minimal libc and malloc are both used.
Fixes #26720
Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
After commit 8a6b02b5bf ("lib/os/heap: some code simplification in
sys_heap_aligned_alloc()") it is no longer required to have a "big"
heap for aligned allocations to work on 32-bit targets. While the
natural alignment for returned memory has an offset of 4 within a chunk
unit due to the smaller header size, returning to a chunkid from a
memory pointer with an offset of 8 will fall back onto the proper chunk
number once the 4 is subtracted and then divided by 8.
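A worked example of that round trip, assuming 8-byte chunk units and
the 4-byte header of 32-bit targets (function name hypothetical):

    #include <stdint.h>
    #include <stddef.h>

    /* chunk c starts at byte offset 8 * c within the heap:
     *   normal allocation:  mem at 8c + 4 -> (8c + 4 - 4) / 8 == c
     *   aligned allocation: mem at 8c + 8 -> (8c + 8 - 4) / 8 == c
     * (integer division discards the leftover 4/8)
     */
    static inline size_t mem_to_chunkid_sketch(void *base, void *mem)
    {
        return (size_t)((uint8_t *)mem - (uint8_t *)base - 4) / 8;
    }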
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The code is doing a split in split_alloc(), adding the leftover to the
free list, then splitting the suffix away in sys_heap_aligned_alloc(),
removing the former leftover from the free list, combining it with the
suffix and finally adding the combined chunk back to the free list.
Instead, let's have each allocator do its own splitting only once by
moving the split_alloc() processing upstream rather than downstream.
This also allows for the "used" flag to be set only once at the end
rather than being overwritten along the way.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Instead of limiting the excess split-off to sufficiently large chunks
in split_alloc(), let's allow normal allocations to create "solo free
headers" just like with aligned allocations. There is no point leaving
them in the allocated chunk if the user didn't ask for it. Doing so
makes them eligible for merging at the next opportunity and potentially
reusable sooner.
Also make the validation code aware of them.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
One fundamental validation criteria is to never have consecutive free
chunks. If that ever happens we failed to merge them. That means a free
chunk must always be surrounded by used chunks.
It is a pain to extend valid_chunk() with new rules as it is, so a
VALIDATE() macro is introduced to make things easier to work with. It
also isolates each test, making it possible to turn VALIDATE() into
__ASSERT() to determine exactly which test is tripping when debugging.
Finally, because of that new validation rule, sys_heap_validate() must
be modified so as not to use valid_chunk() while it is flipping all the
"used" flags. So let's run valid_chunk() up front before altering
chunk headers.
Now sys_heap_validate() has become justifiably more expensive and a few
emulated targets are about to bust the tests/lib/heap test timeout. So
bump the timeout as well.
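A sketch of what such a macro can look like (accessor prototypes
assumed; the in-tree definition may differ):

    #include <stdbool.h>
    #include <stddef.h>

    struct z_heap;
    typedef unsigned long chunkid_t;

    extern bool chunk_used(struct z_heap *h, chunkid_t c);
    extern size_t chunk_size(struct z_heap *h, chunkid_t c);
    extern chunkid_t right_chunk(struct z_heap *h, chunkid_t c);

    /* bail out of the enclosing function on failure; may be turned
     * into __ASSERT(exp, #exp) to see exactly which test trips
     */
    #define VALIDATE(exp) do { if (!(exp)) { return false; } } while (0)

    static bool valid_chunk_sketch(struct z_heap *h, chunkid_t c)
    {
        VALIDATE(chunk_size(h, c) > 0);
        /* never two consecutive free chunks: if this chunk and its
         * right neighbor are both free, a merge was missed
         */
        VALIDATE(chunk_used(h, c) || chunk_used(h, right_chunk(h, c)));
        return true;
    }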
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This makes the code cleaner wrt bucket_idx() usage on chunks for which
solo_free_header() is true. In that case the bucket_idx() computation
is useless, and potentially undefined anyway.
In the same vein, move the clearing of the used flag out of
free_chunks() as only one of its callers actually needs that.
Make free_chunks() singular, as there is only one chunk (potentially
spanning multiple chunk units) to free.
Also some cosmetic changes for better code uniformity.
No functional changes.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Currently printk isn't synchronized except at the byte output level,
leading to interleaving of messages on SMP systems that try to log
simultaneously. This is fairly amusing, and occasionally even helpful
to validate inter-CPU contention down to the "few cycles" level.
Still, when you're printing data you need to read, you need to be able
to read it. Put a spinlock around each buffered line. This has to
happen in a few places, as there are three different code paths taken
for !USERSPACE, syscall, and user mode.
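A minimal sketch of the per-line locking (buffer handling and the
byte-output hook are hypothetical):

    #include <zephyr.h>

    static struct k_spinlock buf_lock;

    extern void out_char(int c);    /* byte-output hook (assumed) */

    static void flush_line(const char *line, size_t len)
    {
        k_spinlock_key_t key = k_spin_lock(&buf_lock);

        /* emit the whole buffered line under the lock so output
         * from other CPUs cannot interleave mid-line
         */
        for (size_t i = 0; i < len; i++) {
            out_char(line[i]);
        }

        k_spin_unlock(&buf_lock, key);
    }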
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The width for %p on 32-bit targets should be 8 regardless of
CONFIG_PRINTK64. Adjust the test accordingly.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Some checks in sys_heap_init() depend on the externally provided size
parameter. If the check fails, this would be a bug outside of the heap
code and therefore should be flagged regardless of the value of
CONFIG_SYS_HEAP_VALIDATE.
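For instance (a sketch only; the actual checks and messages differ):

    #include <zephyr.h>
    #include <sys/sys_heap.h>

    #define MIN_HEAP_BYTES 64    /* assumed lower bound */

    void sys_heap_init_sketch(struct sys_heap *h, void *mem, size_t bytes)
    {
        /* bytes comes from the caller, so flag a bad value even
         * when CONFIG_SYS_HEAP_VALIDATE is disabled
         */
        __ASSERT(bytes >= MIN_HEAP_BYTES, "heap size is too small");

        /* ... normal initialization follows ... */
    }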
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add support for 64 bit conversions in a uniformly expressable way by
printing values backwards into a buffer on the stack first. This
allows all operations to work on the low bits of the value and so the
code doesn't need to care (beyond the size of that buffer) about the
word size. This trick also doesn't care about the specifics of the
base value, so in the process this unifies the decimal and hex printk
conversion code to a single function.
This comes at a mild cost in CPU cycles to the decimal converter and
somewhat higher cost to hex (because it's now doing a full div/mod
operation instead of shifting and masking). And stack usage has grown
by a few words to hold the temporary. But the benefits in code size
are substantial (e.g. ~250 bytes of .text on arm32).
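A condensed sketch of the technique (no field widths or flags):

    #include <stdint.h>

    /* generate digits backwards into a small stack buffer; only
     * "low bits" operations (% and /) touch the value, so the same
     * code handles 32- and 64-bit quantities in any base
     */
    static void print_num(uint64_t num, unsigned int base,
                          void (*out)(char c))
    {
        char buf[24];    /* 2^64 - 1 needs at most 20 decimal digits */
        unsigned int i = sizeof(buf);

        do {
            buf[--i] = "0123456789abcdef"[num % base];
            num /= base;
        } while (num != 0);

        while (i < sizeof(buf)) {
            out(buf[i++]);
        }
    }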
Note that this also contains a change to tests/kernel/common to
address what appears to have been a bug in the original converters.
The printk test uses a format string that looks like "%-4x%-2p" and
feeds it the literal arguments "0xABCDEF" and "(char *)42".
Now... clearly both those results are going to overflow the 4- and
2-character field widths, so there shouldn't be any whitespace between
these fields. But the test was written to expect two spaces, inexplicably
(yes, I checked: POSIX-compatible printf implementations don't have
those spaces either).
The new code is definitely doing the right thing, so fix the test
instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The current CoAP implementation does not perform any checks, including
for duplicated packets. This adds block sequence verification and a
timer to ensure that slow networks work appropriately.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
The current implementation uses a fixed value for max retries. That
value could be good for a wired network like Ethernet. However, a
wireless network can suffer from higher packet collision rates, a low
reception signal etc. This refactors the value into a Kconfig option.
This way max retries can be adjusted to suit the current media.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
The hints variable is used without a defined state. Fill the struct
with zeros to put the variable in a well-known state.
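I.e. the usual pattern (a sketch):

    #include <string.h>
    #include <net/socket.h>

    void resolve_sketch(void)
    {
        struct zsock_addrinfo hints;

        /* zero-fill first so every field starts in a well-known
         * state, then set only the fields we care about
         */
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
    }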
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Currently the log only prints at the default log level. Add LOG_LEVEL
to updatehub to switch log verbosity based on
CONFIG_UPDATEHUB_LOG_LEVEL.
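A sketch of the usual Zephyr pattern this implies:

    #define LOG_LEVEL CONFIG_UPDATEHUB_LOG_LEVEL
    #include <logging/log.h>
    LOG_MODULE_REGISTER(updatehub);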
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Add support for a C11-style aligned_alloc() in the heap
implementation. This is properly optimized, in the sense that unused
prefix/suffix data around the chosen allocation is returned to the
heap and made available for general allocation.
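Illustrative usage (argument order assumed as (heap, align, bytes)):

    #include <sys/sys_heap.h>

    void aligned_alloc_example(struct sys_heap *heap)
    {
        /* request 1024 bytes on a 64-byte boundary */
        void *p = sys_heap_aligned_alloc(heap, 64, 1024);

        if (p != NULL) {
            /* unused prefix/suffix space around the chosen
             * allocation went back to the free list
             */
            sys_heap_free(heap, p);
        }
    }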
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Miscellaneous refactoring and simplification. No behavioral changes:
Make split_alloc() take and return chunk IDs and not memory pointers,
leaving the conversion between memory/chunks the job of the higher
level sys_heap_alloc() API. This cleans up the internals for code
that wants to do allocation but has its own ideas about what to do
with the resulting chunks.
Add split_chunks() and merge_chunks() utilities to own the linear/size
pointers and have split_alloc() and free_chunks() use them instead of
doing the list management directly.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This struct is taking up most of the heap's constant footprint overhead.
We can easily get rid of the list_size member as it is mostly used to
determine if the list is empty, and that can be determined through
other means.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Make the LEFT_SIZE field first and SIZE_AND_USED field last (for an
allocated chunk) so they sit right next to the allocated memory. The
current chunk's SIZE_AND_USED field points to the next (right) chunk,
and from there the LEFT_SIZE field should point back to the current
chunk. Many trivial memory overflows should trip that test.
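Conceptually (a sketch; the real code uses raw field accessors rather
than a struct):

    #include <stdint.h>

    struct chunk_header {
        uint32_t left_size;        /* size of the chunk to the left */
        uint32_t size_and_used;    /* this chunk's size and used flag */
        /* allocated memory begins right here */
    };

    /* a write past the end of an allocation clobbers the next
     * chunk's left_size, which then no longer points back at the
     * current chunk, so the check above trips
     */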
One way to make this test more robust could involve xor'ing the values
within respective accessor pairs. But at least the fact that the size
value is shifted by one bit already prevents fooling the test with a
same-byte corruption.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We already have chunk #0 containing our struct z_heap and marked as
used. We can add a partial chunk at the very end that is also marked
as used. By doing so there is no longer a need for checking heap
boundaries at run time when merging/splitting chunks meaning fewer
conditionals in the code's hot path.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
It is possible to remove a few fields from struct z_heap, removing
some runtime indirections by doing so:
- The buf pointer is actually the same as the struct z_heap pointer
itself. So let's simply create chunk_buf() that performs a type
conversion. That type is also chunk_unit_t now rather than u64_t so
it can be defined based on CHUNK_UNIT.
- Replace the struct z_heap_bucket pointer by a zero-sized array at the
end of struct z_heap.
- Make chunk #0 into an actual chunk with its own header. This allows
for removing the chunk0 field and streamlining the code. This way
h->chunk0 becomes right_chunk(h, 0). This sets the table for further
simplifications to come.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
By storing the used flag in the LSB, it is no longer necessary to have
a size_mask variable to locate that flag. This produces smaller and
faster code.
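A sketch of the encoding, simplified to bare integers (the su_*
helper names are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* size in the high bits, used flag in bit 0 */
    static inline bool su_used(uint32_t size_and_used)
    {
        return (size_and_used & 1U) != 0U;
    }

    static inline size_t su_size(uint32_t size_and_used)
    {
        return size_and_used >> 1;
    }

    static inline uint32_t su_encode(size_t size, bool used)
    {
        return (uint32_t)(size << 1) | (used ? 1U : 0U);
    }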
Replace the validation check in chunk_set() to base it on the storage
type.
Also clarify the semantics of set_chunk_size() which allows for clearing
the used flag bit unconditionally which simplifies the code further.
The idea of moving the used flag bit into the LEFT_SIZE field was
raised. It turns out that this isn't as beneficial as it may seem
because the used bit is set only once i.e. when the memory is handed off
to a user and the size field becomes frozen at that point. Modifications
on the leftward chunk may still occur and extra instructions to preserve
that bit would be necessary if it were moved there.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Let's provide accessors for getting and setting every field to make the
chunk header layout abstracted away from the main code. Those are:
SIZE_AND_USED: chunk_used(), chunk_size(), set_chunk_used() and
set_chunk_size().
LEFT_SIZE: left_chunk() and set_left_chunk_size().
FREE_PREV: prev_free_chunk() and set_prev_free_chunk().
FREE_NEXT: next_free_chunk() and set_next_free_chunk().
To be consistent, the former chunk_set_used() is now set_chunk_used().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
First, some renames to make accessors more explicit:
size() --> chunk_size()
used() --> chunk_used()
free_prev() --> prev_free_chunk()
free_next() --> next_free_chunk()
Then, the return type of chunk_size() is changed from chunkid_t to
size_t, and chunk_used() from chunkid_t to bool.
The left_size() accessor is used only once and can be easily substituted
by left_chunk(), so it is removed.
And in free_list_add() the variable b is renamed to bi so as to be
consistent with usage in sys_heap_alloc().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The library supports the declaration of JSON arrays as both nested and
top-level elements. However, as the provided encoding functions
json_obj_encode() and json_obj_encode_buf() interpret all input
structures as objects, top-level arrays are encoded as
{"<field_name>":[{...},...,{...}]}
instead of
[{...},...,{...}].
Add new functions json_arr_encode() and json_arr_encode_buf() that
enable top-level JSON array encoding.
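Hypothetical usage, with the descriptor layout following the existing
object array support:

    #include <data/json.h>
    #include <sys/util.h>

    struct point { int x; int y; };

    struct points {
        struct point elems[4];
        size_t num_elems;
    };

    static const struct json_obj_descr point_descr[] = {
        JSON_OBJ_DESCR_PRIM(struct point, x, JSON_TOK_NUMBER),
        JSON_OBJ_DESCR_PRIM(struct point, y, JSON_TOK_NUMBER),
    };

    static const struct json_obj_descr arr_descr[] = {
        JSON_OBJ_DESCR_OBJ_ARRAY(struct points, elems, 4, num_elems,
                                 point_descr, ARRAY_SIZE(point_descr)),
    };

    /* emits [{"x":1,"y":2}] rather than {"elems":[{"x":1,"y":2}]} */
    int encode_points(const struct points *p, char *buf, size_t len)
    {
        return json_arr_encode_buf(arr_descr, p, buf, len);
    }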
Signed-off-by: Markus Fuchs <markus.fuchs@de.sauter-bc.com>
The version as shipped in Newlib itself is coded a bit sloppily for an
embedded environment. We thus want to override it (and make it weak, to
allow user apps to override it in turn, if needed). The desired
properties of the implementation are:
1. It should call _write() (Newlib implementation calls write()).
2. It should be minimal (Newlib implementation allocates message
on the stack, i.e. misses "static const").
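A minimal sketch with those properties (the _write() prototype and
the exact return value handling are assumptions):

    #include <stdio.h>
    #include <string.h>

    extern int _write(int fd, const char *buf, int nbytes);

    __attribute__((weak)) int puts(const char *s)
    {
        int len = (int)strlen(s);

        if (_write(1, s, len) != len || _write(1, "\n", 1) != 1) {
            return EOF;
        }

        return len + 1;    /* any non-negative value means success */
    }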
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Search for an unused eventfd object and just remember its instance in
the loop body. Initialize the object later, to make initialization
distinct from the "search phase". This change is basically a
readability improvement.
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Anytime a file descriptor context object is updated, we need to
reset its access permissions and initialization state. This
is the most centralized place to do it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fix the variable-size string copy patch that introduced a runtime bug
causing a bus fault.
Fixes #24853.
Signed-off-by: Tahir Akram <mtahirbutt@hotmail.com>
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Depending on the current platform, a warning can be raised because of
a missing string.h include file.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
The conversion from the DT_FLASH_AREA to the FLASH_AREA macros didn't
add the storage/flash_map.h include file.
Fixes: #25332
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Convert with a combo of scripts and by hand fixups:
git grep -l DT_FLASH_AREA_.*_ID | \
xargs sed -i -r 's/DT_FLASH_AREA_(.*)_ID/FLASH_AREA_ID(\L\1)/'
git grep -l DT_FLASH_AREA_.*_OFFSET | \
xargs sed -i -r 's/DT_FLASH_AREA_(.*)_OFFSET/FLASH_AREA_OFFSET(\L\1)/'
git grep -l DT_FLASH_AREA_.*_SIZE | \
xargs sed -i -r 's/DT_FLASH_AREA_(.*)_SIZE/FLASH_AREA_SIZE(\L\1)/'
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Update to the new timeout API. Without this change UpdateHub doesn't
build anymore.
Fixes: #25230
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Mostly trivial search-and-replace, except for pthread_rwlock.c, where
we need to spread the timeout over 2 semaphore operations.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>