The STM32 UART driver doesn't support 9-bit transactions in any
configuration, so remove the case where it was declared as supported.
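A minimal sketch of the resulting behavior (the helper shape is
illustrative, not the literal driver source):

    #include <errno.h>
    #include <drivers/uart.h>

    /* 9 data bits are rejected regardless of the rest of the config */
    static int uart_stm32_check_databits(const struct uart_config *cfg)
    {
            if (cfg->data_bits == UART_CFG_DATA_BITS_9) {
                    return -ENOTSUP;
            }
            return 0;
    }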
Fixes #31799
Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
Clear the Floating Point Status and Control Register (FPSCR) to
prevent the interrupt line from being set to pending again, in case
the FPU IRQ is selected by the test as the "Available IRQ line".
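A minimal sketch of the clearing step (assuming the CMSIS
__get_FPSCR()/__set_FPSCR() helpers; the exact exception-flag mask is
an assumption):

    /* clear the cumulative exception flags (IOC/DZC/OFC/UFC/IXC, IDC)
     * so a stale flag cannot re-pend the FPU interrupt line */
    static void fpu_clear_exceptions(void)
    {
            __set_FPSCR(__get_FPSCR() & ~0x9FU);
            __DSB();
            __ISB();
    }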
Fixes #31982
Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
Some recent changes exposed some common "arch_switch() anti-patterns"
in various architectures. The documentation technically described
this all correctly, but probably wasn't as clear as it should have
been. Rewrite, making clear exactly what needs to happen and how the
fields should be interpreted.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.
Example:
* CPU0 is idle, CPU1 is running thread A
* CPU1 makes high priority thread B runnable
* CPU1 reaches a schedule point (or returns from an interrupt) and
decides to run thread B instead
* CPU0 simultaneously takes its IPI and returns, selecting thread A
Now both CPUs enter wait_for_switch() to spin, each waiting for the
context switch code on the other thread to finish and mark that
thread runnable. So we have a deadlock: each CPU is spinning, waiting
for the other!
Actually, in practice this seems not to happen on existing hardware
platforms; it's only exercisable in emulation. The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears. I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly. In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.
The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), and guaranteed to reach the end in bounded
time.
Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync. Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity. The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments. No behavior
change.
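The shape of the API, sketched (mirrors the commit's description;
_priq_run_add()/_priq_run_remove() stand for the existing run queue
primitives):

    static ALWAYS_INLINE void queue_thread(void *pq,
                                           struct k_thread *thread)
    {
            thread->base.thread_state |= _THREAD_QUEUED;
            _priq_run_add(pq, thread);
    }

    static ALWAYS_INLINE void dequeue_thread(void *pq,
                                             struct k_thread *thread)
    {
            thread->base.thread_state &= ~_THREAD_QUEUED;
            _priq_run_remove(pq, thread);
    }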
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL. So it will get the signal to move on when we revert the
value, even though we are clearly still running on that stack!
In practice this was found on x86 which poisons the switch context
such that it crashes instantly.
Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.
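The resulting ordering, sketched (simplified flow; switch_handle is
the field in question):

    /* before running the chosen thread: */
    void *handle = new_thread->switch_handle;

    new_thread->switch_handle = NULL;  /* it is running now: never let
                                        * another CPU read a stale,
                                        * momentarily-reverted value */
    arch_switch(handle, &old_thread->switch_handle);
    /* the arch layer stores old_thread->switch_handle only once the
     * outgoing context is fully saved, which is the one true signal
     * a spinning CPU may proceed on */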
Fixes #28105
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This was merged by mistake without being tested and is not working
properly. We need to avoid doing a BUILD_ASSERT() when the relevant
property is missing, because we can't use DT_GPIO_CTLR() on an
undefined property. Handle this with COND_CODE_1().
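A hedged sketch of the pattern (the CHECK_CTLR name is illustrative;
COND_CODE_1(), DT_NODE_HAS_PROP() and DT_GPIO_CTLR() are the real
util/devicetree macros):

    /* only expand the assert when the property exists, since
     * DT_GPIO_CTLR() cannot be used on an undefined property */
    #define CHECK_CTLR(node_id, prop)                                \
            COND_CODE_1(DT_NODE_HAS_PROP(node_id, prop),             \
                        (BUILD_ASSERT(DT_NODE_HAS_STATUS(            \
                                DT_GPIO_CTLR(node_id, prop), okay),  \
                                "GPIO controller not enabled")),     \
                        ())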
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
In binutils SORT is an alias for SORT_BY_NAME. Don't confuse people
by replacing explicit use of the actual directive with an alias for
that same directive.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
When VERSION is changed, do not wait for daily cron and publish
documentation immediately to keep things in sync.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
We have been publishing docs to the wrong folder on AWS S3. We still had
the official docs published to the correct place manually though, so all
was good.
This change will eliminate the manual step of publishing documentation
and will put things where they belong.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add misc changes still needed in release notes for
x86 Boards and SoCs after checking their history:
- zefi.py: Use cross compiler while building zephyr
- boards: x86: ehl_crb: Add board variant for Slim Bootloader
- tests: enable the code coverage report for qemu_x86_64
- drivers/timer: Remove legacy APIC driver
- x86: add common memory.ld
- x86: reserve the first megabyte
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
Fail after pairing request and response have been exchanged if the
selected pairing method would not result in the required security level.
This avoids the case where we would only discover this after having
encrypted the connection, and would have to disconnect instead.
This was partially attempted before, but it lacked checking the
authentication requirement when L3 was required, and the check was
skipped entirely when L4 was required but the remote did not support
Secure Connections, since the check came after we had taken the
legacy branch.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Set the error in the security changed callback when the encryption has
not reached the required security level.
Terminate the pairing procedure in SMP on failure to avoid the security
changed callback being called twice in this case.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Terminate the pairing procedure when disconnected while this was in
progress. This notifies the application that security has failed and
removes the key entry.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Set the SMP flag encryption pending in the case where a bond exists
with ediv and rand equal to zero, i.e. an LE Secure Connections bond.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
When receiving unexpected SMP PDUs with no pairing procedure in
progress, don't treat it as a pairing procedure that has failed.
Doing so caused unexpected SMP PDUs to trigger the pairing failed and
security changed callbacks at unexpected times.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Add status-only pairing callbacks (complete and failed) so that these
handlers can be added without providing the ability for MITM pairing
procedures.
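A usage sketch (bt_conn_auth_cb and bt_conn_auth_cb_register() from
the Bluetooth host API; only the status callbacks are set, so no MITM
capability is declared):

    #include <bluetooth/conn.h>
    #include <sys/printk.h>

    static void pairing_complete(struct bt_conn *conn, bool bonded)
    {
            printk("pairing complete, bonded: %d\n", bonded);
    }

    static void pairing_failed(struct bt_conn *conn,
                               enum bt_security_err reason)
    {
            printk("pairing failed: %d\n", reason);
    }

    static struct bt_conn_auth_cb auth_cb = {
            /* no passkey/confirm handlers, so no MITM is advertised */
            .pairing_complete = pairing_complete,
            .pairing_failed = pairing_failed,
    };

    /* somewhere in init: bt_conn_auth_cb_register(&auth_cb); */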
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Rename auth_err_get to security_err_get, which better reflects the
error namespace it converts to. Also update the local variable
holding the returned value to use the enum definition instead of
uint8_t.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Only the CAVS 1.5 linker script has full support for the coherence
features; don't advertise it on the other SoCs yet.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
A fairly common idiom in our test code is to put test-local data
structures onto the stack, even when they are to be used from another
thread. But stacks are incoherent memory on some platforms, which
means that such things may not get a consistent view of memory between
threads.
Just make these things static. A few of these spots were causing test
failures on intel_adsp_cavs15. More were found by inspection while
hunting for mistakes.
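For example (an illustrative pattern, not one specific test):

    #include <kernel.h>

    /* before: on the test thread's (incoherent) stack, so another
     * thread may see a stale view of the semaphore */
    static void bad_test(void)
    {
            struct k_sem sem;

            k_sem_init(&sem, 0, 1);
            /* handing &sem to another thread is unsafe here */
    }

    /* after: static storage is placed in coherent memory */
    static struct k_sem shared_sem;

    static void good_test(void)
    {
            k_sem_init(&shared_sem, 0, 1);
            /* sharing &shared_sem across threads is fine */
    }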
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Some legacy spots in our IPC layer (legally) pass a NULL wait queue to
pend(). Allow this in the coherence assertion.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The poll code uses a dummy wait queue so the threads have something to
block on, but the previous coherence pass (which rearranged things to
put the _poller data elsewhere) missed that this was on the stack,
which is not allowed. It actually has no use except as a list, so
make it a global static instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Thread stack memory on coherence platforms needs to be linked into a
special section (so it can be cached).
Also, the test_idle_stack case just can't work with coherence. It's
measuring the CPU's idle stack's unused data, which was initialized at
boot from CPU0, and not necessarily the CPU on which the test is
running. In practice on intel_adsp_cavs15, our CPU has stale zeroes
in the cache for its unused stack area (presumably from a firmware
memory clear at boot or something?). Making this work would require a
cache invalidate on all CPUs at boot time before the idle threads
start; we can't do it here in the test because we don't know where the
idle stack pointer is.
Too much work for an esoteric stack size test, basically. Just
disable on these platforms.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The z_swap_unlocked() function used a dummy spinlock for simplicity.
But this runs afoul of the check for stack-resident spinlocks
(forbidden when KERNEL_COHERENCE is set). And it's executing needless
code to release the lock anyway. Replace with a compile time NULL,
which will improve performance, correctness and code size.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The CONFIG_KERNEL_COHERENCE framework merged with a typo that left its
validation asserts disabled. But it was written before the "kernel
stacks" feature merged, and so missed the K_KERNEL_STACK_* macros,
which need to put their stacks into __stackmem and not merely
__noinit.
Turning the asserts on exposed the bug.
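The requirement, sketched (the real macro body differs; __stackmem is
the section attribute the commit names):

    /* kernel-only stacks must live in the special stack section, not
     * merely __noinit, so KERNEL_COHERENCE platforms can map them */
    #define K_KERNEL_STACK_DEFINE(sym, size)                \
            struct z_thread_stack_element __stackmem        \
                    sym[size]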
Fixes#32112
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When BT_PASSKEY_INVALID was set, it never updated the fixed passkey,
which made its use ineffective.
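A usage sketch of the affected API (requires CONFIG_BT_FIXED_PASSKEY):

    #include <bluetooth/bluetooth.h>

    void passkey_demo(void)
    {
            /* fix the passkey */
            int err = bt_passkey_set(123456);

            /* with this change, the passkey really is cleared again */
            err = bt_passkey_set(BT_PASSKEY_INVALID);
            (void)err;
    }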
Signed-off-by: Faisal Saleem <faisal.saleem@setec.com.au>
The sample was failing the twister test with a timeout because there
was no pass/fail criterion (nothing was tested). The fix adds a
console harness and some output that can be verified.
Signed-off-by: Maciej Perkowski <Maciej.Perkowski@nordicsemi.no>
The sample was failing the twister test with a timeout because there
was no pass/fail criterion (nothing was tested). The fix adds a
console harness and some output that can be verified.
Signed-off-by: Maciej Perkowski <Maciej.Perkowski@nordicsemi.no>
The sample was failing the twister test with a timeout because there
was no pass/fail criterion (nothing was tested). The fix adds a
console harness and some output that can be verified.
Signed-off-by: Maciej Perkowski <Maciej.Perkowski@nordicsemi.no>
This commit fixes sporadic kernel panics (data bus errors) when
writing big chunks of data to the flash.
Reference manual:
If an erase operation in Flash memory also concerns data in the data
or instruction cache, you have to make sure that these data are
rewritten before they are accessed during code execution.
If this cannot be done safely, it is recommended to flush the caches
by setting the DCRST and ICRST bits in the Flash access control
register (FLASH_ACR).
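An illustrative flush sequence (register and bit names as in the CMSIS
device headers; the caches must be disabled while being reset):

    /* assumes the STM32 CMSIS device header, e.g. stm32l4xx.h */
    static void flash_flush_caches(void)
    {
            FLASH->ACR &= ~(FLASH_ACR_ICEN | FLASH_ACR_DCEN);
            FLASH->ACR |= FLASH_ACR_ICRST | FLASH_ACR_DCRST;
            FLASH->ACR &= ~(FLASH_ACR_ICRST | FLASH_ACR_DCRST);
            FLASH->ACR |= FLASH_ACR_ICEN | FLASH_ACR_DCEN;
    }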
Signed-off-by: Alexander Wachter <alexander.wachter@leica-geosystems.com>
Zephyr test cases (unlike the SOF case) do not use the kernel DSP
driver to load the image on the ADSP board, and thus do not need
signing with the xman manifest. So add a '--no-manifest' input to
specify signing without the xman in the image. If the DSP driver is
used to load the image, this option should not be specified.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
When loading firmware via the script, the buffer size and the data
size are not the same, so specify the size when copying data into the
buffer.
Signed-off-by: Jian Kang <jianx.kang@intel.com>
The ATT request buffers are held until the ATT response has been
received. This means that the ATT request buffers are released by the
RX thread, instead of from the RX priority context of
num_complete.
This can cause a deadlock in the RX thread when we allocate buffers
and all the available buffers are ATT requests, since the RX thread is
the only thread that can release buffers.
Release the ATT request buffers once they have been sent and instead
handle ATT request resending by reconstructing the buffer from the
GATT parameters.
Also re-order the resource allocation by allocating the request
context before the buffer, as sketched below. This ensures that we
cannot allocate more buffers for ATT requests than there are ATT
requests.
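A shape-only sketch of that ordering (bt_att_req_alloc() and
bt_att_req_free() are named below; att_create_pdu() and the timeout
are stand-in names):

    /* request context first: ATT requests can never out-allocate
     * the request pool */
    req = bt_att_req_alloc(timeout);
    if (!req) {
            return -ENOMEM;
    }

    /* only then the buffer; on failure, hand the context back */
    buf = att_create_pdu(conn, op, len);
    if (!buf) {
            bt_att_req_free(req);
            return -ENOMEM;
    }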
Fix a buf reference leak that could occur when the ATT request buffer
has been allocated, but GATT returns an error before handing the
responsibility of the buffer to ATT, for example when bt_att_req_alloc
fails.
This is fixed by moving the functionality of att_req_destroy to
bt_att_req_free.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Remove the ATT request destroy callback which is never assigned
by any of the ATT requests.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Fix indicate without func not working properly. When sent as a
non-request by GATT, this had two problems:
- The indication would not be treated as a transaction, and
  back-to-back indications would be sent without waiting for the
  confirmation.
- The destroy callback would not be called on the indicate parameters,
  since the indicate_rsp callback would not be called.
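For reference, the pattern this concerns, sketched (the attribute and
value are placeholders):

    #include <bluetooth/gatt.h>

    static uint8_t value;

    static void ind_destroy(struct bt_gatt_indicate_params *params)
    {
            /* with the fix, this fires even when .func is NULL */
    }

    void send_indication(struct bt_conn *conn,
                         const struct bt_gatt_attr *attr)
    {
            static struct bt_gatt_indicate_params ind;

            ind.attr = attr;
            ind.func = NULL;           /* no confirmation callback */
            ind.destroy = ind_destroy;
            ind.data = &value;
            ind.len = sizeof(value);

            bt_gatt_indicate(conn, &ind);
    }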
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
ATT channels do support queueing buffers, so there is no longer a
need to block waiting on the tx_sem; besides, the buffer allocation
already serves the same purpose, as the application will not be able
to have more requests than there are buffers available.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Allow to request a higher security level during the key distribution
phase.
This is required by ATT and L2CAP since they only react to the encrypt
change event where they resend the current request.
The current request might require a higher security level still and
might have to request a higher security level before the pairing
procedure has been finished.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
When ATT resends an ATT request, it is sent as a "response" instead
of as a request. This causes the ATT request buffer to be released,
so the ATT request cannot be resent one more time.
This causes a problem when the ATT request requires authentication
but the elevation of security does not enforce MITM protection.
In this case ATT will first require security level L2 and then resend
the request once this level has been reached.
This will lead to a new ATT error response, after which ATT will
require security level L3.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>