One fundamental validation criterion is to never have two consecutive free
chunks. If that ever happens, we failed to merge them. In other words, a
free chunk must always be surrounded by used chunks.
Extending valid_chunk() with new rules is a pain as it stands, so a
VALIDATE() macro is introduced to make things easier to work with.
It also isolates each test, making it possible to turn VALIDATE() into
__ASSERT() to determine exactly which test is tripping when debugging.
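To illustrate the idea (a rough sketch only; the actual macro and rule
set live in the heap validation code and may differ):

    /* returning false keeps sys_heap_validate() usable at run time;
     * swapping this for __ASSERT(cond) pinpoints the failing rule */
    #define VALIDATE(cond) do { if (!(cond)) { return false; } } while (0)

    static bool valid_chunk(struct z_heap *h, chunkid_t c)
    {
        VALIDATE(chunk_size(h, c) > 0);
        /* the new rule: no two consecutive free chunks */
        VALIDATE(chunk_used(h, c) || chunk_used(h, right_chunk(h, c)));
        return true;
    }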
Finally, because of that new validation rule, sys_heap_validate() must
be modified so as not to use valid_chunk() while it is flipping all the
"used" flags. So let's run valid_chunk() up front, before altering
chunk headers.
Now sys_heap_validate() has become justifiably more expensive and a few
emulated targets are about to bust the tests/lib/heap test timeout. So
bump the timeout as well.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This makes the code cleaner wrt bucket_idx() usage on chunks for which
solo_free_header() is true. In that case the bucket_idx() computation
is useless, and potentially undefined anyway.
In the same vein, move the clearing of the used flag out of
free_chunks(), as only one of its callers actually needs that.
Also make free_chunks() singular (i.e. free_chunk()), as there is only
one chunk (potentially spanning multiple chunk units) to free.
Also some cosmetic changes for better code uniformity.
No functional changes.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Currently printk isn't synchronized except at the byte output level,
leading to interleaving of messages on SMP systems that try to log
simultaneously. This is actually fairly amusing, and occasionally even
helpful to validate inter-CPU contention down to the "few cycles"
level.
Still, when you're printing data you need to read, you need to be able
to read it. Put a spinlock around each buffered line. This has to
happen in a few places, as there are three different code paths taken
for !USERSPACE, syscall, and user mode.
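The locking pattern is roughly this (a sketch; flush_line() and
out_byte() are made-up names for the buffered-output path and the
per-byte output hook):

    static struct k_spinlock lock;

    static void flush_line(const char *buf, size_t len)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);

        while (len--) {
            out_byte(*buf++);   /* per-byte output, now under one lock hold */
        }
        k_spin_unlock(&lock, key);
    }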
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The width for %p on 32-bit targets should be 8 regardless of
CONFIG_PRINTK64. Adjust the test accordingly.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Some checks in sys_heap_init() depend on the externally provided size
parameter. If such a check fails, the bug is outside of the heap code
and should therefore be flagged regardless of the value of
CONFIG_SYS_HEAP_VALIDATE.
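For instance, along these lines (illustrative condition, not the actual
check):

    /* the size comes from the caller: misuse is a bug outside the
     * heap code, so assert unconditionally rather than under
     * CONFIG_SYS_HEAP_VALIDATE */
    __ASSERT(bytes >= sizeof(struct z_heap) + CHUNK_UNIT,
             "heap size too small");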
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add support for 64 bit conversions in a uniformly expressible way by
printing values backwards into a buffer on the stack first. This
allows all operations to work on the low bits of the value and so the
code doesn't need to care (beyond the size of that buffer) about the
word size. This trick also doesn't care about the specifics of the
base value, so in the process this unifies the decimal and hex printk
conversion code to a single function.
This comes at a mild cost in CPU cycles to the decimal converter and
somewhat higher cost to hex (because it's now doing a full div/mod
operation instead of shifting and masking). And stack usage has grown
by a few words to hold the temporary. But the benefits in code size
are substantial (e.g. ~250 bytes of .text on arm32).
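The core of the trick, in sketch form (to_digits() is a made-up name;
the real converter also handles field width and padding):

    /* Fill the buffer from the end so only the low bits of the value
     * are ever examined; a 64-bit decimal needs at most 20 digits. */
    static char *to_digits(char *end, unsigned long long num, unsigned int base)
    {
        char *p = end;

        *--p = '\0';
        do {
            unsigned int d = num % base;  /* full div/mod works for 10 and 16 alike */

            *--p = (d < 10) ? ('0' + d) : ('a' + d - 10);
            num /= base;
        } while (num != 0);

        return p;   /* points at the most significant digit */
    }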
Note that this also contains a change to tests/kernel/common to
address what appears to have been a bug in the original converters.
The printk test uses a format string that looks like "%-4x%-2p" and
feeds it the literal arguments "0xABCDEF" and "(char *)42".
Now... clearly both of those results are going to overflow the 4- and
2-character field widths, so there shouldn't be any whitespace between
these fields. But the test was inexplicably written to expect two
spaces (yes, I checked: POSIX-compatible printf implementations don't
emit those spaces either).
The new code is definitely doing the right thing, so fix the test
instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The current CoAP implementation doesn't perform any checks, not even
for duplicated packets. Add block sequence verification and a timer to
ensure that slow networks work appropriately.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
The current implementation uses a fixed value for max retries. That
value may be fine for a wired network like Ethernet; however, wireless
networks can suffer from higher packet collision rates, weak reception
signal, etc. Refactor the value into a Kconfig option so that max
retries can be adjusted to match the current media.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
The hints variable is used without a defined state. Fill the struct
with zeros to put the variable in a well-known state.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Currently, logging only happens at the default log level. Define
LOG_LEVEL in updatehub to switch between log levels based on
CONFIG_UPDATEHUB_LOG_LEVEL.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Add support for a C11-style aligned_alloc() in the heap
implementation. This is properly optimized, in the sense that unused
prefix/suffix data around the chosen allocation is returned to the
heap and made available for general allocation.
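Usage is along these lines (a sketch; argument order as in the C11
call, check the sys_heap header for the actual signature):

    /* 256 bytes aligned to a 64-byte boundary, from an initialized
     * struct sys_heap named heap */
    void *p = sys_heap_aligned_alloc(&heap, 64, 256);

    if (p != NULL) {
        __ASSERT(((uintptr_t)p & 63) == 0, "misaligned");
        sys_heap_free(&heap, p);   /* unused prefix/suffix was never kept */
    }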
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Miscellaneous refactoring and simplification. No behavioral changes:
Make split_alloc() take and return chunk IDs and not memory pointers,
leaving the conversion between memory and chunks as the job of the
higher level sys_heap_alloc() API. This cleans up the internals for
code that wants to do allocation but has its own ideas about what to do
with the resulting chunks.
Add split_chunks() and merge_chunks() utilities to own the linear/size
pointers and have split_alloc() and free_chunks() use them instead of
doing the list management directly.
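In sketch form (close to, but not necessarily identical to, the real
code):

    /* Split the chunk at lc so that a new chunk header begins at rc,
     * fixing up both sizes and both LEFT_SIZE back-pointers. */
    static void split_chunks(struct z_heap *h, chunkid_t lc, chunkid_t rc)
    {
        size_t lsz = rc - lc;
        size_t rsz = chunk_size(h, lc) - lsz;

        set_chunk_size(h, lc, lsz);
        set_chunk_size(h, rc, rsz);
        set_left_chunk_size(h, rc, lsz);
        set_left_chunk_size(h, right_chunk(h, rc), rsz);
    }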
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This struct is taking up most of the heap's constant footprint overhead.
We can easily get rid of the list_size member as it is mostly used to
determine if the list is empty, and that can be determined through
other means.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Make the LEFT_SIZE field first and SIZE_AND_USED field last (for an
allocated chunk) so they sit right next to the allocated memory. The
current chunk's SIZE_AND_USED field points to the next (right) chunk,
and from there the LEFT_SIZE field should point back to the current
chunk. Many trivial memory overflows should trip that test.
One way to make this test more robust could involve XOR'ing the values
within respective accessor pairs. But at least the fact that the size
value is shifted by one bit already prevents fooling the test with a
same-byte corruption.
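The check itself is a one-liner (sketch, reusing the VALIDATE() helper
from earlier in this series):

    /* the right neighbor's LEFT_SIZE must point back at this chunk */
    VALIDATE(left_chunk(h, right_chunk(h, c)) == c);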
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We already have chunk #0 containing our struct z_heap and marked as
used. We can add a partial chunk at the very end that is also marked
as used. By doing so, there is no longer a need for checking heap
boundaries at run time when merging/splitting chunks, meaning fewer
conditionals in the code's hot path.
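A sketch of the effect on the free path (free_list_remove() is assumed
here as the counterpart of free_list_add()):

    static void free_chunk(struct z_heap *h, chunkid_t c)
    {
        /* no boundary tests needed: the used sentinels at both ends
         * guarantee that left/right neighbors always exist */
        if (!chunk_used(h, right_chunk(h, c))) {
            free_list_remove(h, right_chunk(h, c));
            merge_chunks(h, c, right_chunk(h, c));
        }
        if (!chunk_used(h, left_chunk(h, c))) {
            chunkid_t lc = left_chunk(h, c);

            free_list_remove(h, lc);
            merge_chunks(h, lc, c);
            c = lc;
        }
        free_list_add(h, c);
    }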
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
It is possible to remove a few fields from struct z_heap, eliminating
some runtime indirections in the process:
- The buf pointer is actually the same as the struct z_heap pointer
itself, so let's simply create a chunk_buf() that performs the type
conversion (sketched after this list). That type is also chunk_unit_t
now rather than u64_t, so it can be defined based on CHUNK_UNIT.
- Replace the struct z_heap_bucket pointer by a zero-sized array at the
end of struct z_heap.
- Make chunk #0 into an actual chunk with its own header. This allows
for removing the chunk0 field and streamlining the code. This way
h->chunk0 becomes right_chunk(h, 0). This sets the table for further
simplifications to come.
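A sketch of the accessor from the first point:

    /* chunk #0 overlays struct z_heap itself, so the chunk array and
     * the heap pointer are one and the same address */
    static inline chunk_unit_t *chunk_buf(struct z_heap *h)
    {
        return (chunk_unit_t *)h;
    }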
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
By storing the used flag in the LSB, it is no longer necessary to have
a size_mask variable to locate that flag. This produces smaller and
faster code.
Replace the validation check in chunk_set() to base it on the storage
type.
Also clarify the semantics of set_chunk_size(), which allows for
clearing the used flag bit unconditionally and simplifies the code
further.
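In sketch form (chunk_field_get()/chunk_field_set() stand in for
whatever raw header access the code actually uses):

    static inline bool chunk_used(struct z_heap *h, chunkid_t c)
    {
        return chunk_field_get(h, c, SIZE_AND_USED) & 1;
    }

    static inline size_t chunk_size(struct z_heap *h, chunkid_t c)
    {
        return chunk_field_get(h, c, SIZE_AND_USED) >> 1;
    }

    static inline void set_chunk_size(struct z_heap *h, chunkid_t c, size_t size)
    {
        /* storing size << 1 clears the used bit as a side effect */
        chunk_field_set(h, c, SIZE_AND_USED, size << 1);
    }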
The idea of moving the used flag bit into the LEFT_SIZE field was
raised. It turns out that this isn't as beneficial as it may seem
because the used bit is set only once i.e. when the memory is handed off
to a user and the size field becomes frozen at that point. Modifications
on the leftward chunk may still occur and extra instructions to preserve
that bit would be necessary if it were moved there.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Let's provide accessors for getting and setting every field so that the
chunk header layout is abstracted away from the main code. Those are:
SIZE_AND_USED: chunk_used(), chunk_size(), set_chunk_used() and
set_chunk_size().
LEFT_SIZE: left_chunk() and set_left_chunk_size().
FREE_PREV: prev_free_chunk() and set_prev_free_chunk().
FREE_NEXT: next_free_chunk() and set_next_free_chunk().
To be consistent, the former chunk_set_used() is now set_chunk_used().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
First, some renames to make accessors more explicit:
size() --> chunk_size()
used() --> chunk_used()
free_prev() --> prev_free_chunk()
free_next() --> next_free_chunk()
Then, the return type of chunk_size() is changed from chunkid_t to
size_t, and chunk_used() from chunkid_t to bool.
The left_size() accessor is used only once and can be easily substituted
by left_chunk(), so it is removed.
And in free_list_add() the variable b is renamed to bi so as to be
consistent with its usage in sys_heap_alloc().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The library supports the declaration of JSON arrays as both nested and
top-level elements. However, as the provided encoding functions
json_obj_encode() and json_obj_encode_buf() interpret all input
structures as objects, top-level arrays are encoded as
{"<field_name>":[{...},...,{...}]}
instead of
[{...},...,{...}].
Add new functions json_arr_encode() and json_arr_encode_buf() that
enable top-level JSON array encoding.
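For illustration, a sketch of the difference (struct names are made up,
and this assumes json_arr_encode_buf() mirrors json_obj_encode_buf()
minus the descriptor count):

    struct elem {
        const char *name;
        int value;
    };

    struct arr {
        struct elem elems[4];
        size_t num_elems;
    };

    static const struct json_obj_descr elem_descr[] = {
        JSON_OBJ_DESCR_PRIM(struct elem, name, JSON_TOK_STRING),
        JSON_OBJ_DESCR_PRIM(struct elem, value, JSON_TOK_NUMBER),
    };

    static const struct json_obj_descr arr_descr[] = {
        JSON_OBJ_DESCR_OBJ_ARRAY(struct arr, elems, 4, num_elems,
                                 elem_descr, ARRAY_SIZE(elem_descr)),
    };

    struct arr a = { .elems = { { "x", 1 }, { "y", 2 } }, .num_elems = 2 };
    char buf[128];

    /* json_obj_encode_buf(arr_descr, 1, &a, buf, sizeof(buf)) would
     * produce {"elems":[...]}; the new call produces the bare array: */
    int ret = json_arr_encode_buf(arr_descr, &a, buf, sizeof(buf));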
Signed-off-by: Markus Fuchs <markus.fuchs@de.sauter-bc.com>
The version as shipped in Newlib itself is coded a bit sloppily for an
embedded environment. We thus want to override it (and make it weak, to
allow user apps to override it in turn, if needed). The desired
properties of the implementation are:
1. It should call _write() (Newlib implementation calls write()).
2. It should be minimal (Newlib implementation allocates message
on the stack, i.e. misses "static const").
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Search for an unused eventfd object and just remember its instance in
the loop body. Initialize the object later, to keep initialization
distinct from the "search phase". This change is basically a
readability improvement.
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Anytime a file descriptor context object is updated, we need to
reset its access permissions and initialization state. This
is the most centralized place to do it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fix a runtime bug, introduced by the variable-size string copy patch,
that causes a bus fault.
Fixes #24853.
Signed-off-by: Tahir Akram <mtahirbutt@hotmail.com>
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Depending on the platform, a warning can be raised because of a missing
string.h include.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
The conversion from DT_FLASH_AREA to FLASH_AREA macros didn't add the
flash_map.h include file needed for the storage area.
Fixes: #25332
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Convert with a combo of scripts and by hand fixups:
git grep -l DT_FLASH_AREA_.*_ID | \
xargs sed -i -r 's/DT_FLASH_AREA_(.*)_ID/FLASH_AREA_ID(\L\1)/'
git grep -l DT_FLASH_AREA_.*_OFFSET | \
xargs sed -i -r 's/DT_FLASH_AREA_(.*)_OFFSET/FLASH_AREA_OFFSET(\L\1)/'
git grep -l DT_FLASH_AREA_.*_SIZE | \
xargs sed -i -r 's/DT_FLASH_AREA_(.*)_SIZE/FLASH_AREA_SIZE(\L\1)/'
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Update to the new timeout API. Without this change, UpdateHub doesn't
build anymore.
Fixes: #25230
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Mostly trivial search-and-replace, except for pthread_rwlock.c, where
we need to spread the timeout over two semaphore operations.
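For the rwlock case, the pattern is roughly this (illustrative names,
not the actual code):

    /* turn the single caller-supplied timeout into a deadline and give
     * the second wait only whatever budget is left */
    int64_t end = k_uptime_get() + timeout_ms;

    if (k_sem_take(&first_sem, K_MSEC(timeout_ms)) != 0) {
        return ETIMEDOUT;
    }

    int64_t remaining = end - k_uptime_get();

    if (k_sem_take(&second_sem, K_MSEC(MAX(remaining, 0))) != 0) {
        return ETIMEDOUT;
    }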
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Mostly simple. Note that the CMSIS RTOS2 API specifies timeout values
in system ticks instead of milliseconds, so the conversions here are
able to elide a conversion that the original code had to do. That's a
good thing, but does mean that in practice runtime behavior will not
be 1:1 identical.
Also note that the switch away from legacy timeouts involved a change
to 64 bit timeouts by default, which pushed
tests/portability/cmsis_rtos_v2 over the limit on qemu_xtensa.
Unfortunately CMSIS stacks have a fixed limit we can't increase, so I
turned off 64 bit timeouts (CMSIS apps won't need them by definition
anyway -- their API is 32 bit).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
No complexity here. The CMSIS API was always in milliseconds, so this
needs nothing but a few wrapper macros for kernel timeout arguments.
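i.e. essentially this (sketch; the real wrapper name differs):

    /* CMSIS v1 passes milliseconds; map straight onto kernel timeouts */
    #define TIMEOUT_MS(ms) \
        (((ms) == osWaitForever) ? K_FOREVER : K_MSEC(ms))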
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
DTLS without peer verification offers no security whatsoever (and is
arguably worse than not using DTLS in the first place).
Change the verification option to require peer verification. To
use this, it may be necessary to install and use a root certificate.
Signed-off-by: David Brown <david.brown@linaro.org>
Move defines for _RAM_ADDR, _RAM_SIZE, _ROM_ADDR, and _ROM_SIZE into
linker.ld and thus remove dts_fixup.h. We rework to use
DT_REG_ADDR and DT_REG_SIZE on DT_CHOSEN(zephyr_sram) and
DT_CHOSEN(zephyr_flash).
Also fix up the use of _RAM_ADDR/_RAM_SIZE in newlib/libc-hooks.c.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Replace DT_PHYS_RAM_ADDR and DT_RAM_SIZE with DT_REG_ADDR/DT_REG_SIZE
for the DT_CHOSEN(zephyr_sram) node.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The bounds check failed to account for the additional space required
for the terminating NUL after the encoded value was written.
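In sketch form (illustrative names):

    /* was: if (needed > space_left) -- forgot the terminating NUL */
    if (needed + 1 > space_left) {
        return -ENOMEM;
    }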
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This implements a file descriptor used for event notification that
behaves like the eventfd in Linux.
The eventfd supports nonblocking operation by setting the EFD_NONBLOCK
flag and semaphore operation by setting the EFD_SEMAPHORE flag.
The major use case for this is when using poll() and the sockets that
you poll are dynamic. When a new socket needs to be added to the poll,
there must be some way to wake the thread and update the pollfds before
calling poll again. One way to solve this is to set a timeout in the
poll call and only update the pollfds on timeout, but that is not a
very nice solution. By instead including an eventfd in the pollfds,
it is possible to wake the polling thread by simply writing to the
eventfd.
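A sketch of that pattern (error handling elided; sock stands for any
already-open socket):

    #include <net/socket.h>
    #include <sys/eventfd.h>

    int efd = eventfd(0, EFD_NONBLOCK);

    struct zsock_pollfd fds[] = {
        { .fd = efd,  .events = ZSOCK_POLLIN },
        { .fd = sock, .events = ZSOCK_POLLIN },
    };

    /* polling thread */
    zsock_poll(fds, ARRAY_SIZE(fds), -1);
    if (fds[0].revents & ZSOCK_POLLIN) {
        eventfd_t cnt;

        eventfd_read(efd, &cnt);   /* drain the wakeup */
        /* rebuild fds[] with the new socket set, then poll again */
    }

    /* any other thread: force the poll above to return */
    eventfd_write(efd, 1);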
Signed-off-by: Tobias Svehagen <tobias.svehagen@gmail.com>
The previous architecture proved unable to support user expectations,
so the API has been rebuilt from first principles. Backward
compatibility cannot be maintained for this change.
Key changes include:
* Formerly the service-provided transition functions were allowed to
sleep, and the manager took care to not invoke them from ISR
context, instead returning an error if unable to initiate a
transition. In the new architecture transition functions are
required to work regardless of calling context: it is the service's
responsibility to guarantee the transition will proceed even if it
needs to be transferred to a thread. This eliminates state machine
complexities related to calling context.
* Constants identifying the visible state of the manager are exposed
to clients through both notification callbacks and a new monitor API
that allows clients to be notified of all state changes.
* Formerly the release operation was async, and would be delayed for the
last release to ensure a client would exist to be notified of any
failures. It is now synchronous.
* Formerly the cancel operation would fail on the last client associated
with a transition. The cancel operation is now synchronous.
* A helper function is provided to safely synchronously release a
request regardless of whether it has completed or is in progress,
satisfying the use case underlying #22974.
* The user-data parameter to asynchronous notification callbacks has
been removed, as user data can be retrieved with CONTAINER_OF from the
structure embedding the client data (see the sketch below).
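A sketch of the last point (the callback signature shown here is
approximate):

    struct my_client {
        struct onoff_client cli;   /* embedded client structure */
        void *user_data;
    };

    static void my_notify(struct onoff_manager *mgr,
                          struct onoff_client *cli,
                          uint32_t state, int res)
    {
        struct my_client *mc = CONTAINER_OF(cli, struct my_client, cli);

        /* mc->user_data replaces the removed user-data parameter */
    }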
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Fix a thread fault, in user mode, when reading the variable
rt_clock_base.
For the moment, clock_settime is left without a system call: we don't
want to expose clock_settime without figuring out access control.
Signed-off-by: Julien D'Ascenzio <julien.dascenzio@paratronic.fr>
Improve buffer overflow security in probe_cb. Ensure that the socket
buffer has a fixed length and that content received over CoAP fits
properly in the metadata buffer. After that, ensure that the metadata
content is a valid string whose length is lower than the metadata size.
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>