Commit graph

7 commits

Luis Ubieda
3646b63217 rtio: Add RTIO Work-queues service
Adds the ability to process synchronous requests in an asynchronous
fashion, relying on dedicated P4WQs and pre-allocated work items.
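
A minimal sketch of the pattern this enables, assuming the work-queue
API exposes rtio_work_req_alloc()/rtio_work_req_submit() alongside the
usual iodev completion helpers; the submit hook and handler names are
hypothetical:

  #include <errno.h>
  #include <zephyr/rtio/rtio.h>
  #include <zephyr/rtio/work.h>

  /* Hypothetical handler: runs on the dedicated P4WQ rather than in the
   * submitter's context, so a blocking transfer is safe here. */
  static void blocking_transfer_handler(struct rtio_iodev_sqe *iodev_sqe)
  {
      /* ... perform the synchronous bus transaction ... */
      rtio_iodev_sqe_ok(iodev_sqe, 0); /* report completion */
  }

  /* Hypothetical iodev submit hook: take a pre-allocated work item and
   * defer the synchronous work instead of blocking the caller. */
  static void my_iodev_submit(struct rtio_iodev_sqe *iodev_sqe)
  {
      struct rtio_work_req *req = rtio_work_req_alloc();

      if (req == NULL) {
          rtio_iodev_sqe_err(iodev_sqe, -ENOMEM);
          return;
      }

      rtio_work_req_submit(req, iodev_sqe, blocking_transfer_handler);
  }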

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2024-07-09 17:21:05 -04:00
Daniel Leung
c1d9dc4f18 rtio: syscalls: use zephyr_syscall_header
This adds a few lines using zephyr_syscall_header() to include the
headers containing syscall function prototypes.
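
Roughly, the mechanism looks like the following hedged sketch (the path
and the generated include are illustrative and vary by tree):

  /* In the subsystem's CMakeLists.txt, a line such as
   *
   *   zephyr_syscall_header(${ZEPHYR_BASE}/include/zephyr/rtio/rtio.h)
   *
   * points the syscall generator at a header whose prototypes are
   * tagged for generation, e.g.: */
  __syscall int rtio_submit(struct rtio *r, uint32_t wait_count);

  #include <syscalls/rtio.h> /* generated at build time */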

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-06-17 07:57:45 -04:00
Tom Burdick
e4b10328b4 rtio: Use mpsc for submission and completion queue
Rather than the rings, which in Zephyr weren't shared between userspace
and kernel space like they are in Linux with io_uring, use atomic mpsc
queues for submission and completion.

Most importantly, this removes a potential head-of-line blocker in the
submission queue, as an sqe would otherwise be held until its task
completed.

As additional bonuses, this avoids some locks and restrictions on what
can be submitted and where. It also removes the need for two executors,
as all chains/transactions are done concurrently.

Lastly, this opens up the possibility of a common pool of sqes to
allocate from, potentially saving a lot of memory.
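
To illustrate the technique, here is a generic Vyukov-style intrusive
mpsc queue in C11 atomics; this is a sketch of the approach, not the
exact rtio_mpsc API, and all names are hypothetical:

  #include <stdatomic.h>
  #include <stddef.h>

  struct mpsc_node {
      struct mpsc_node *_Atomic next;
  };

  struct mpsc_queue {
      struct mpsc_node *_Atomic head; /* producers exchange onto this */
      struct mpsc_node *tail;         /* single consumer reads here */
      struct mpsc_node stub;          /* sentinel avoiding empty-queue races */
  };

  static void mpsc_init(struct mpsc_queue *q)
  {
      atomic_store(&q->stub.next, NULL);
      atomic_store(&q->head, &q->stub);
      q->tail = &q->stub;
  }

  /* Producer side: a single atomic exchange makes this safe from any
   * context (thread or ISR), with no head-of-line blocking. */
  static void mpsc_push(struct mpsc_queue *q, struct mpsc_node *n)
  {
      atomic_store(&n->next, NULL);
      struct mpsc_node *prev = atomic_exchange(&q->head, n);
      atomic_store(&prev->next, n);
  }

  /* Consumer side: returns NULL when empty or when a producer is
   * mid-push (try again later). */
  static struct mpsc_node *mpsc_pop(struct mpsc_queue *q)
  {
      struct mpsc_node *tail = q->tail;
      struct mpsc_node *next = atomic_load(&tail->next);

      if (tail == &q->stub) { /* skip over the sentinel */
          if (next == NULL) {
              return NULL;
          }
          q->tail = next;
          tail = next;
          next = atomic_load(&tail->next);
      }
      if (next != NULL) {
          q->tail = next;
          return tail;
      }
      if (tail != atomic_load(&q->head)) {
          return NULL; /* producer mid-push */
      }
      mpsc_push(q, &q->stub); /* re-insert the sentinel */
      next = atomic_load(&tail->next);
      if (next != NULL) {
          q->tail = next;
          return tail;
      }
      return NULL;
  }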

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2023-05-10 00:39:43 +09:00
Yuval Peress
dbb470ea7a rtio: Add a managed memory pool for reads
- Introduce a new Kconfig option to enable the mempool in RTIO
- Introduce a new RTIO_DEFINE_WITH_MEMPOOL to allocate an RTIO context
  with an associated memory pool
- Add a new sqe read function rtio_sqe_read_with_pool() for
  memory-pool-enabled RTIO contexts
- Allow IODevs to allocate only the memory they need via rtio_sqe_rx_buf()
- Allow the consumer to get the allocated buffer via
  rtio_cqe_get_mempool_buffer()
- Require consumers to release the buffer via rtio_release_buffer() when
  processing is complete (see the consumer sketch below)
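
A minimal sketch of the consumer flow just listed, assuming
rtio_sqe_read_with_pool() takes (sqe, iodev, priority, userdata) and
that the surrounding calls (rtio_sqe_acquire, rtio_submit,
rtio_cqe_consume, rtio_cqe_release) follow the core RTIO API; the pool
geometry in RTIO_DEFINE_WITH_MEMPOOL is illustrative:

  #include <zephyr/rtio/rtio.h>

  /* 4-deep SQ/CQ; mempool of 16 blocks x 64 bytes, 4-byte aligned */
  RTIO_DEFINE_WITH_MEMPOOL(ez_io, 4, 4, 16, 64, 4);

  void read_one(struct rtio_iodev *iodev)
  {
      struct rtio_sqe *sqe = rtio_sqe_acquire(&ez_io);
      struct rtio_cqe *cqe;
      uint8_t *buf;
      uint32_t buf_len;

      /* No buffer is supplied; the context's mempool provides one when
       * the iodev performs the read. */
      rtio_sqe_read_with_pool(sqe, iodev, RTIO_PRIO_NORM, NULL);
      rtio_submit(&ez_io, 1);

      cqe = rtio_cqe_consume(&ez_io);
      if (rtio_cqe_get_mempool_buffer(&ez_io, cqe, &buf, &buf_len) == 0) {
          /* ... process buf[0..buf_len) ... */
          rtio_release_buffer(&ez_io, buf, buf_len);
      }
      rtio_cqe_release(&ez_io, cqe);
  }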

Signed-off-by: Yuval Peress <peress@google.com>
2023-04-10 18:34:43 -04:00
Tom Burdick
e3d877f811 rtio: Userspace support
Add support for userspace with RTIO by making rtio and rtio_iodev
k_objects, as well as adding three syscalls for copying in submissions,
copying out completions, and starting tasks with submit.

For the small devices Zephyr typically runs on, one of the most
important attributes tends to be low memory usage. To maintain the low
footprint of RTIO and its current executor implementations, the rings
are not shared with userspace. Sharing the rings, it turns out, would
require copying submissions before working with them to avoid TOCTOU
issues.

The API could still support shared rings in the future so that a
kernel thread could directly poll, copy, verify, and start the submitted
work. This would require a third executor implementation that maintains
its own copy of submissions, similar to how io_uring in Linux works.
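
A hedged sketch of the resulting flow, assuming the three syscalls
carry the signatures shown and that a supervisor thread grants the user
thread access to the k_objects; the helper names are hypothetical:

  #include <zephyr/kernel.h>
  #include <zephyr/rtio/rtio.h>

  RTIO_DEFINE(user_io, 4, 4);

  /* Supervisor context: user threads may only touch k_objects they
   * have been granted access to. */
  void grant_access(struct k_thread *user_thread, struct rtio_iodev *iodev)
  {
      k_object_access_grant(&user_io, user_thread);
      k_object_access_grant(iodev, user_thread);
  }

  /* User thread: submissions are copied into kernel space rather than
   * shared, which is what avoids TOCTOU on the sqe contents. */
  void user_read(struct rtio_iodev *iodev, uint8_t *buf, uint32_t len)
  {
      struct rtio_sqe sqe;
      struct rtio_cqe cqe;

      rtio_sqe_prep_read(&sqe, iodev, RTIO_PRIO_NORM, buf, len, NULL);
      rtio_sqe_copy_in(&user_io, &sqe, 1);  /* syscall: copy submission in */
      rtio_submit(&user_io, 1);             /* syscall: start the work */
      rtio_cqe_copy_out(&user_io, &cqe, 1, K_FOREVER); /* copy completion out */
      /* ... check cqe.result ... */
  }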

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-11-08 10:44:03 +01:00
Tom Burdick
121462b129 rtio: Low (Memory) Cost Concurrent scheduler
Schedules I/O chains in the order they arrive, providing a fixed
amount of concurrency. The low memory cost comes with some
computational cost that is likely to be acceptable for small amounts
of concurrency.

The code cost is about 4x that of the simple linear executor, which
isn't entirely unexpected, as the logic required is quite a bit more
involved than doing the next thing in the queue.
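
A conceptual sketch of that policy, with hypothetical names rather than
the Zephyr executor API: chains are admitted strictly in arrival order,
with at most a fixed number in flight at once:

  #define CONCURRENCY 2
  #define PENDING_SZ  8

  struct chain {
      void (*start)(struct chain *c); /* runs the chain's ops in order */
  };

  static struct chain *pending[PENDING_SZ]; /* arrival-ordered ring */
  static unsigned int head, tail;           /* ring indices */
  static unsigned int in_flight;            /* chains currently running */

  /* Admit waiting chains, oldest first, up to the concurrency cap. */
  static void kick(void)
  {
      while (in_flight < CONCURRENCY && head != tail) {
          struct chain *c = pending[head % PENDING_SZ];

          head++;
          in_flight++;
          c->start(c);
      }
  }

  /* Called when a chain's final operation completes. */
  static void chain_done(void)
  {
      in_flight--;
      kick(); /* start the next waiting chain, preserving arrival order */
  }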

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-06-28 13:53:13 -04:00
Tom Burdick
3d2ead38cb rtio: Real-Time Input/Output Stream
A DMA-friendly stream API for Zephyr. Based on ideas from io_uring
and iio, it is a queue-based API for I/O operations.

Provides a pair of fixed-length, ring-buffer-backed queues for
submitting I/O requests and receiving I/O completions. The requests may
be chained together to ensure the next operation does not start until
the current one is complete.

Requests target an abstract rtio_iodev, which is expected to wrap all
the hardware particulars of how to perform the operation. For example,
with a SPI bus device, what a read and a write mean can be decided by
the iodev wrapping a particular device hanging off a SPI controller.

The queue pair is submitted to an executor, which may be a simple
in-place looping executor run in the caller's execution context
(thread/stack), but other executors are expected. A thread-pool
executor might, for example, allow concurrent request chains to execute
in parallel. A DMA executor, in conjunction with DMA-aware iodevs,
would allow hardware offloading of operations, going so far as to
schedule with priority using hardware arbitration.

Both the iodev and executor are definable by a particular
SoC, meaning they can work in conjunction to perform I/O operations
using a particular DMA controller or methodology if desired.

The application decides entirely how large the queues are, where
the buffers to read/write come from (some executors
may have particular demands!), and which executor to submit
requests to.
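
A minimal usage sketch, assuming the context-definition macro and the
prep/consume helpers take the forms shown here (queue depths and the
SPI iodev are illustrative):

  #include <zephyr/rtio/rtio.h>

  RTIO_DEFINE(ez_io, 4, 4); /* 4-deep submission and completion queues */

  /* Chain a write then a read on one iodev: the read does not start
   * until the write completes. */
  void write_then_read(struct rtio_iodev *spi_iodev,
                       uint8_t *tx, uint8_t *rx, uint32_t len)
  {
      struct rtio_sqe *wr = rtio_sqe_acquire(&ez_io);
      struct rtio_sqe *rd = rtio_sqe_acquire(&ez_io);

      rtio_sqe_prep_write(wr, spi_iodev, RTIO_PRIO_NORM, tx, len, NULL);
      wr->flags |= RTIO_SQE_CHAINED; /* the next sqe waits on this one */
      rtio_sqe_prep_read(rd, spi_iodev, RTIO_PRIO_NORM, rx, len, NULL);

      rtio_submit(&ez_io, 2); /* block until both completions arrive */

      for (int i = 0; i < 2; i++) {
          struct rtio_cqe *cqe = rtio_cqe_consume(&ez_io);

          /* cqe->result holds the operation's status */
          rtio_cqe_release(&ez_io, cqe);
      }
  }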

Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
2022-06-28 13:53:13 -04:00