Commit graph

29 commits

Anas Nashif a08bfeb49c syscall: rename Z_OOPS -> K_OOPS
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif 9c4d881183 syscall: rename Z_SYSCALL_ to K_SYSCALL_
Rename internal API to not use z_/Z_.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif df9428991a syscall: Z_SYSCALL_MEMORY_ARRAY -> K_SYSCALL_MEMORY_ARRAY
Rename macros and do not use Z_ for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif 4e396174ce kernel: move syscall_handler.h to internal include directory
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
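
A hypothetical sketch of what a verification function looks like after these renames and the header move; it is loosely modeled on the RTIO handlers, and the function and object names are illustrative rather than the exact code touched by these commits:

    #include <zephyr/internal/syscall_handler.h> /* new internal location */

    /* Verify a userspace-supplied array of SQEs before using it.
     * K_OOPS/K_SYSCALL_* were Z_OOPS/Z_SYSCALL_* before these commits.
     */
    static inline int z_vrfy_my_sqe_copy_in(struct rtio *r,
                                            const struct rtio_sqe *sqes,
                                            size_t sqe_count)
    {
        K_OOPS(K_SYSCALL_OBJ(r, K_OBJ_RTIO));
        K_OOPS(K_SYSCALL_MEMORY_ARRAY_READ(sqes, sqe_count,
                                           sizeof(struct rtio_sqe)));
        return z_impl_my_sqe_copy_in(r, sqes, sqe_count);
    }
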
Tom Burdick 9e8d609b5d rtio: Remove unused Kconfigs for executors
There's only one executor now and it's always built, so there's no need
for these old crufty Kconfigs.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-10-06 09:18:43 +02:00
Gerard Marull-Paretas 691facc20f include: always use <> for Zephyr includes
Double quotes "" should only be used for local headers.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-09-14 13:49:58 +02:00
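
For illustration, the convention this commit enforces:

    #include <zephyr/rtio/rtio.h> /* Zephyr (and external) headers: <> */
    #include "rtio_executor.h"    /* local header next to the source: "" */
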
Gerard Marull-Paretas fc45b3f2dd rtio: add missing init.h include
File was using init.h API without directly including the header.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-08-29 11:29:37 +01:00
Yuval Peress 10be3a1263 rtio: Implement a NO_RESPONSE flag for SQEs
When this flag is set, the SQE's completion will not generate a CQE.
Fixes #59284

Signed-off-by: Yuval Peress <peress@google.com>
2023-06-23 12:31:09 -04:00
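
A minimal sketch of a fire-and-forget write using the new flag, assuming an already-defined context `r`, iodev `spi_iodev`, and buffer `buf`/`buf_len`:

    struct rtio_sqe *sqe = rtio_sqe_acquire(&r);

    rtio_sqe_prep_write(sqe, &spi_iodev, RTIO_PRIO_NORM, buf, buf_len, NULL);
    sqe->flags |= RTIO_SQE_NO_RESPONSE; /* completion produces no CQE */
    rtio_submit(&r, 0);                 /* nothing to wait for */
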
Daniel Leung c1d9dc4f18 rtio: syscalls: use zephyr_syscall_header
This adds a few lines using zephyr_syscall_header() to include
headers containing syscall function prototypes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-06-17 07:57:45 -04:00
Gerard Marull-Paretas dacb3dbfeb iterable_sections: move to specific header
Until now, iterable section APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files
relied on indirect includes to access the API; now it is included as
needed.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-05-22 10:42:30 +02:00
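
A short sketch of the now-explicit include and the APIs it provides, assuming the section is registered in the linker script (e.g. via ITERABLE_SECTION_RAM):

    #include <zephyr/sys/iterable_sections.h>
    #include <zephyr/sys/printk.h>

    struct my_entry {
        const char *name;
    };

    /* Instances placed in an iterable section, walked at runtime */
    STRUCT_SECTION_ITERABLE(my_entry, entry_a) = { .name = "a" };
    STRUCT_SECTION_ITERABLE(my_entry, entry_b) = { .name = "b" };

    void list_entries(void)
    {
        STRUCT_SECTION_FOREACH(my_entry, entry) {
            printk("%s\n", entry->name);
        }
    }
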
Yuval Peress 7153157f88 rtio: Add support for multishot reads
- Introduce multishot reads which remain on the SQ until canceled

Signed-off-by: Yuval Peress <peress@google.com>
2023-05-15 10:10:12 -04:00
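
A sketch of a multishot read, assuming a mempool-enabled context `r` and iodev `sensor_iodev`; `rtio_sqe_prep_read_multishot()` is assumed to be the prep helper for this mode:

    struct rtio_sqe *sqe = rtio_sqe_acquire(&r);

    /* Buffers come from the context's mempool; the sqe is resubmitted
     * automatically and stays on the SQ until canceled.
     */
    rtio_sqe_prep_read_multishot(sqe, &sensor_iodev, RTIO_PRIO_NORM, NULL);
    rtio_submit(&r, 0);
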
Yuval Peress 2c30920b40 rtio: add cancel support
- Add a new API `rtio_sqe_cancel` to attempt canceling a queued SQE
- Add a new syscall `rtio_sqe_copy_in_get_handles` which allows getting
  back the SQE handles generated by the copy_in operation so that they
  can be canceled.

Signed-off-by: Yuval Peress <peress@google.com>
2023-05-15 10:10:12 -04:00
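
A sketch of the cancellation flow using the two APIs named above, assuming a context `r`, iodev `sensor_iodev`, and buffer `buf`/`buf_len`:

    struct rtio_sqe sqe;
    struct rtio_sqe *handle;

    rtio_sqe_prep_read(&sqe, &sensor_iodev, RTIO_PRIO_NORM, buf, buf_len, NULL);
    /* Copy the SQE in, keeping a handle to the kernel-side copy */
    rtio_sqe_copy_in_get_handles(&r, &sqe, &handle, 1);
    rtio_submit(&r, 0);
    /* ... later, attempt to cancel it ... */
    rtio_sqe_cancel(handle);
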
Tom Burdick ea8930bd78 rtio: Cleanup the various define macros
Reworks the Zephyr macros and pools to be objects in their own right. Each
pool can be statically defined with a Z_ private macro. The objects can
then be statically initialized with an rtio instance.

This cleans up a lot of code that was otherwise doing little bits of
management around allocation/freeing, and reduces the scope of those
functions to the data they need.

This should make it easy to share the pools of sqes, cqes, and mem blocks
among rtio instances in a future improvement.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-05-10 00:39:43 +09:00
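
The user-facing result is that a single public macro statically defines a context along with its pools, which are built from the Z_ private macros underneath; a sketch:

    /* Context named `ezio` with a 4-deep submission queue and a 4-deep
     * completion queue; the backing pools are defined behind the scenes.
     */
    RTIO_DEFINE(ezio, 4, 4);
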
Tom Burdick e4b10328b4 rtio: Use mpsc for submission and completion queue
Rather than the rings, which weren't shared between userspace and kernel
space in Zephyr like they are in Linux with io_uring, use atomic mpsc
queues for submission and completion queues.

Most importantly, this removes a potential head-of-line blocker in the
submission queue, as an sqe would otherwise be held until its task
completed.

As additional bonuses, this avoids some locks and restrictions about
what can be submitted and where. It also removes the need for two
executors, as all chains/transactions are done concurrently.

Lastly, this opens up the possibility of a common pool of sqes to
allocate from, potentially saving lots of memory.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-05-10 00:39:43 +09:00
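
With mpsc-backed queues, completions are popped one at a time rather than read from a ring slot; a consumer sketch, assuming a context `r`:

    /* Block until a completion is available; rtio_cqe_consume() is the
     * non-blocking variant and returns NULL when the CQ is empty.
     */
    struct rtio_cqe *cqe = rtio_cqe_consume_block(&r);

    if (cqe->result < 0) {
        printk("op failed: %d\n", cqe->result);
    }
    rtio_cqe_release(&r, cqe);
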
Yuval Peress 3fd0a1508b rtio: fix bug in mempool release API
It was previously assumed that the 'sys_mem_blocks' struct would maintain
information about contiguous blocks allocated so the release API only
took the starting address. This led to an issue where allocating 2+
blocks would end up with a memory leak because any block not being the
first would never be released.

Add the buffer length as an argument so the correct number of blocks can
be released. Also, amend the tests to match and verify.

Signed-off-by: Yuval Peress <peress@google.com>
2023-05-01 09:26:06 -05:00
Yuval Peress dbb470ea7a rtio: Add a managed memory pool for reads
- Introduce a new Kconfig to enable mempool in RTIO
- Introduce a new RTIO_DEFINE_WITH_MEMPOOL to allocate an RTIO context
  with an associated memory pool.
- Add a new sqe read function rtio_sqe_read_with_pool() for memory pool
  enabled RTIO contexts
- Allow IODevs to allocate only the memory they need via rtio_sqe_rx_buf()
- Allow the consumer to get the allocated buffer via
  rtio_cqe_get_mempool_buffer()
- Consumers need to release the buffer via rtio_release_buffer() when
  processing is complete.

Signed-off-by: Yuval Peress <peress@google.com>
2023-04-10 18:34:43 -04:00
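
A sketch of the whole flow using the APIs listed above (exact signatures are assumed; the buffer length argument to rtio_release_buffer() is the one added by the fix further up this log):

    /* 4-deep SQ/CQ, 16 blocks of 32 bytes, 4-byte alignment */
    RTIO_DEFINE_WITH_MEMPOOL(r, 4, 4, 16, 32, 4);

    void read_one(const struct rtio_iodev *iodev)
    {
        uint8_t *buf;
        uint32_t buf_len;
        struct rtio_sqe *sqe = rtio_sqe_acquire(&r);

        rtio_sqe_read_with_pool(sqe, iodev, RTIO_PRIO_NORM, NULL);
        rtio_submit(&r, 1);

        struct rtio_cqe *cqe = rtio_cqe_consume(&r);

        rtio_cqe_get_mempool_buffer(&r, cqe, &buf, &buf_len);
        /* ... process buf[0..buf_len) ... */
        rtio_cqe_release(&r, cqe);
        rtio_release_buffer(&r, buf, buf_len);
    }
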
Yuval Peress 80d70b4e96 rtio: Add cqe per each sqe in transaction
Update the policy such that every completed sqe has a parallel cqe.
This has the primary purpose of making any reads in the sqe visible
to the consumer (since they might have different buffers).

Signed-off-by: Yuval Peress <peress@google.com>
2023-04-10 18:34:43 -04:00
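
Under the new policy, a two-sqe transaction yields two CQEs; a consumer sketch, assuming a context `r`:

    /* One CQE per completed SQE, so a read in the middle of a
     * transaction is visible to the consumer.
     */
    for (int i = 0; i < 2; i++) {
        struct rtio_cqe *cqe = rtio_cqe_consume_block(&r);

        /* cqe->userdata identifies which sqe this completion is for */
        rtio_cqe_release(&r, cqe);
    }
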
Tom Burdick a539d9c904 rtio: Add transceive op
Adds the transceive op, which is needed for full SPI support. Notably,
in RTIO transceive is expected to use a balanced buffer pair of the same
length with non-null tx/rx bufs.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-04-03 09:51:02 +02:00
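
A sketch of the constraint in practice, assuming a context `r`, an iodev `spi_iodev`, and the usual prep-helper pattern:

    uint8_t tx[8] = { 0xAA };
    uint8_t rx[8]; /* same length as tx; both must be non-null */
    struct rtio_sqe *sqe = rtio_sqe_acquire(&r);

    rtio_sqe_prep_transceive(sqe, &spi_iodev, RTIO_PRIO_NORM,
                             tx, rx, sizeof(tx), NULL);
    rtio_submit(&r, 1);
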
Tom Burdick 912e7ff863 rtio: Add callback op
Adds a callback op to RTIO, enabling C logic to be interspersed with
I/O operations. This is not safe from userspace but is allowed in
kernel space.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-04-03 09:51:02 +02:00
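
A sketch chaining a callback after a write (kernel space only), assuming the callback signature follows the rtio_callback_t pattern:

    #include <zephyr/rtio/rtio.h>
    #include <zephyr/sys/printk.h>

    extern struct rtio r;               /* assumed context */
    extern struct rtio_iodev spi_iodev; /* assumed iodev */

    static void on_write_done(struct rtio *ctx, const struct rtio_sqe *sqe,
                              void *arg0)
    {
        printk("write finished\n");
    }

    void write_then_notify(uint8_t *buf, uint32_t buf_len)
    {
        struct rtio_sqe *wr = rtio_sqe_acquire(&r);
        struct rtio_sqe *cb = rtio_sqe_acquire(&r);

        rtio_sqe_prep_write(wr, &spi_iodev, RTIO_PRIO_NORM, buf, buf_len, NULL);
        wr->flags |= RTIO_SQE_CHAINED; /* callback runs after the write */
        rtio_sqe_prep_callback(cb, on_write_done, NULL, NULL);
        rtio_submit(&r, 0);
    }
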
Tom Burdick bb72809326 rtio: Add tiny write op
When sending small buffers out, it makes sense to copy rather than
reference them, to avoid having to keep the small buffer around for the
lifetime of the write request.

Adjusts the op numbers to always be +1 from the previously defined op
id, making it easier to rearrange them if needed in the future.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-04-03 09:51:02 +02:00
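
A sketch, assuming a context `r` and iodev `uart_iodev`; the bytes are copied into the SQE itself, so the source buffer need not outlive the request:

    static const uint8_t cmd[3] = { 0x01, 0x02, 0x03 }; /* fits in the sqe */
    struct rtio_sqe *sqe = rtio_sqe_acquire(&r);

    rtio_sqe_prep_tiny_write(sqe, &uart_iodev, RTIO_PRIO_NORM,
                             cmd, sizeof(cmd), NULL);
    rtio_submit(&r, 0); /* cmd may be reused or freed immediately */
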
Tom Burdick 610d307fa0 rtio: Properly track last sqe in the queue
The pending_sqe logic to track where in the ring queue the concurrent
executor had left off was slightly flawed. It didn't account for starting
all sqes in the queue and ending back up at the beginning.

Instead, track the last SQE in the queue, from which the next one in the
queue will be the one to start next.

If we happen to sweep the last known SQE in the queue, reset it to NULL
so the next time prepare is called we start at the beginning of the queue
again.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-03-17 12:49:57 -05:00
Tom Burdick 3353342e5f rtio: Add transactional submissions
Transactional submissions treat a sequence of sqes as a single atomic
submission given to a single iodev, and expect a single completion as a
reply.

This is useful for scatter gather like APIs that exist in Zephyr already
for I2C and SPI.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-03-17 12:49:57 -05:00
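
A sketch mirroring an I2C write-then-read, assuming a context `r`, iodev `i2c_iodev`, and buffer `data`/`data_len`; the RTIO_SQE_TRANSACTION flag ties an sqe to its successor:

    uint8_t reg_addr = 0x0F; /* hypothetical register address */
    struct rtio_sqe *wr = rtio_sqe_acquire(&r);
    struct rtio_sqe *rd = rtio_sqe_acquire(&r);

    rtio_sqe_prep_write(wr, &i2c_iodev, RTIO_PRIO_NORM, &reg_addr, 1, NULL);
    wr->flags |= RTIO_SQE_TRANSACTION; /* atomic with the next sqe */
    rtio_sqe_prep_read(rd, &i2c_iodev, RTIO_PRIO_NORM, data, data_len, NULL);
    rtio_submit(&r, 1);
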
Tom Burdick 3998f9f898 rtio: Shareable lock-free iodevs
By using an mpsc queue for each iodev, the iodev itself is shareable across
contexts. Since it's lock-free, submits may occur even from an ISR context.

Rather than a fixed size queue, and with it the possibility of running
out of pre-allocated spots, each iodev now holds a wait-free mpsc
queue head.

This changes the parameter of iodev submit to be a struct containing 4
pointers for the rtio context, the submission queue entry, and the mpsc
node for the iodev's submission queue.

This solves the problem involving busy iodevs working with real
devices. For example, a busy SPI bus driver could enqueue, without locking,
a request to start once the current request is done.

The queue entries are expected to be owned and allocated by the
executor rather than the iodev. This helps simplify potential
tuning knobs to one place: the RTIO context and its executor, which an
application directly uses.

As the test case shows, iodevs can operate effectively lock-free
with the mpsc queue and a single atomic denoting the current task.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-03-03 09:18:09 +01:00
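
A sketch of an iodev submit built around the struct parameter described above; a trivial device that completes immediately, where a real driver would queue the mpsc node and complete later:

    #include <zephyr/rtio/rtio.h>

    static void my_iodev_submit(struct rtio_iodev_sqe *iodev_sqe)
    {
        /* iodev_sqe bundles the sqe, the owning rtio context, and the
         * mpsc node used to enqueue on a busy iodev without locking.
         */
        rtio_iodev_sqe_ok(iodev_sqe, 0); /* complete with result 0 */
    }

    static const struct rtio_iodev_api my_iodev_api = {
        .submit = my_iodev_submit,
    };
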
Tom Burdick e3d877f811 rtio: Userspace support
Add support for userspace with RTIO by making rtio and rtio_iodev
k_objects, as well as adding three syscalls for copying in submissions,
copying out completions, and starting tasks with submit.

For the small devices Zephyr typically runs on, one of the most important
attributes tends to be low memory usage. To maintain the low footprint of
RTIO and its current executor implementations, the rings are not shared with
userspace. Sharing the rings, it turns out, would require copying
submissions before working with them to avoid TOCTOU issues.

The API could still support shared rings in the future so that a
kernel thread could directly poll, copy, verify, and start the submitted
work. This would require a third executor implementation that maintains its
own copy of submissions, similar to how io_uring in Linux works.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-11-08 10:44:03 +01:00
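
A userspace-side sketch of the three syscalls, assuming the context `r` and iodev `sensor_iodev` have been granted to the thread (e.g. with k_object_access_grant()):

    struct rtio_sqe sqe;
    struct rtio_cqe cqe;

    rtio_sqe_prep_read(&sqe, &sensor_iodev, RTIO_PRIO_NORM, buf, buf_len, NULL);
    rtio_sqe_copy_in(&r, &sqe, 1);             /* copy submissions in */
    rtio_submit(&r, 1);                        /* start the work */
    rtio_cqe_copy_out(&r, &cqe, 1, K_FOREVER); /* copy completions out */
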
Tom Burdick 8b6ede6047 rtio: Release sqe before submitting cqe
When an application is waiting on a completion it may be expecting,
rightfully so, that a new submission slot is available.

Frees the submission queue event prior to enqueuing
the completion queue event in the simple executor.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-10-04 09:31:13 -05:00
Fabio Baltieri 24d09d363c include: fix the remaining legacy #include paths
Add the "zephyr/" prefix to various #include statements that are
preventing the CI form running with LEGACY_INCLUDE_PATH=n.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-08-02 16:41:41 +01:00
Tomislav Milkovic 0fe2c1fe90 everywhere: Fix legacy include paths
Any project with the Kconfig option CONFIG_LEGACY_INCLUDE_PATH set to n
couldn't be built because some files were missing the zephyr/ prefix in
their includes.
Re-ran the migrate_includes.py script to fix all legacy include paths.

Signed-off-by: Tomislav Milkovic <milkovic@byte-lab.com>
2022-07-18 16:16:47 +00:00
Tom Burdick 121462b129 rtio: Low (Memory) Cost Concurrent scheduler
Schedules I/O chains in the same order as they arrive, providing a fixed
amount of concurrency. The low memory cost comes with some
computational cost that is likely to be acceptable with small amounts
of concurrency.

The code cost is about 4x higher than the simple linear executor, which
isn't entirely unexpected, as the logic requirements are quite a bit
more than doing the next thing in the queue.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-06-28 13:53:13 -04:00
Tom Burdick 3d2ead38cb rtio: Real-Time Input/Output Stream
A DMA-friendly stream API for Zephyr. Based on ideas from io_uring
and iio, a queue-based API for I/O operations.

Provides a pair of fixed-length ringbuffer-backed queues for submitting
I/O requests and receiving I/O completions. The requests may be chained
together to ensure the next operation does not start until the current
one is complete.

Requests target an abstract rtio_iodev, which is expected to wrap all
the hardware particulars of how to perform the operation. For example,
with a SPI bus device, what a read and a write mean can be decided by
the iodev wrapping a particular device hanging off a SPI controller.

The queue pair is submitted to an executor, which may be a simple
in-place looping executor run in the caller's execution context
(thread/stack), but other executors are expected. A threadpool executor
might, for example, allow concurrent request chains to execute in
parallel. A DMA executor, in conjunction with DMA-aware iodevs,
would allow hardware offloading of operations, going so far as to
schedule with priority using hardware arbitration.

Both the iodev and executor are definable by a particular
SoC, meaning they can work in conjunction to perform IO operations
using a particular DMA controller or methodology if desired.

The application decides entirely how large the queues are, where
the buffers to read/write come from (some executors
may have particular demands!), and which executor to submit
requests to.

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-06-28 13:53:13 -04:00
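
A minimal end-to-end sketch of the model described here, written against the present-day API (at the time of this commit RTIO_DEFINE also named an executor): define a context, chain a write to a read, submit, and consume the completions.

    #include <zephyr/rtio/rtio.h>

    RTIO_DEFINE(ezio, 4, 4); /* SQ and CQ depth chosen by the application */

    extern struct rtio_iodev spi_iodev; /* assumed iodev wrapping a SPI device */

    void write_then_read(uint8_t *cmd, uint32_t cmd_len,
                         uint8_t *resp, uint32_t resp_len)
    {
        struct rtio_sqe *wr = rtio_sqe_acquire(&ezio);
        struct rtio_sqe *rd = rtio_sqe_acquire(&ezio);

        rtio_sqe_prep_write(wr, &spi_iodev, RTIO_PRIO_NORM, cmd, cmd_len, NULL);
        wr->flags |= RTIO_SQE_CHAINED; /* read starts after the write completes */
        rtio_sqe_prep_read(rd, &spi_iodev, RTIO_PRIO_NORM, resp, resp_len, NULL);

        rtio_submit(&ezio, 2); /* wait for both completions */

        for (int i = 0; i < 2; i++) {
            struct rtio_cqe *cqe = rtio_cqe_consume(&ezio);

            rtio_cqe_release(&ezio, cqe);
        }
    }
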