samples: drivers: ipm: added IPM over IVSHMEM sample

Demonstrates how to configure Zephyr to use the IPM
driver over the IVSHMEM subsystem. Since this driver
is intended to generate notifications between QEMU VMs,
a shell-based sample app was created so users can try
it out quickly.

Signed-off-by: Felipe Neves <felipe.neves@linaro.org>
Felipe Neves authored 2023-07-17 09:55:43 -03:00, committed by Fabio Baltieri
commit da3ae1af61
9 changed files with 259 additions and 1 deletion

@@ -76,7 +76,7 @@ static int cmd_ivshmem_shmem(const struct shell *sh,
shell_fprintf(sh, SHELL_NORMAL,
"IVshmem up and running: \n"
"\tShared memory: 0x%x of size %u bytes\n"
"\tShared memory: 0x%lx of size %lu bytes\n"
"\tPeer id: %u\n"
"\tNotification vectors: %u\n",
mem, size, id, vectors);

@@ -0,0 +1,8 @@
# Copyright (c) 2023 Linaro
# SPDX-License-Identifier: Apache-2.0
cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(ivshmem_ipm_sample)
target_sources(app PRIVATE src/main.c)

@@ -0,0 +1,129 @@
IPM over IVSHMEM Driver sample
################################
Prerequisites
*************
* QEMU needs to be available.
* ivshmem-server needs to be available and running. The server is available in
the Zephyr SDK or pre-built in some distributions. Otherwise, it is available in
the QEMU source tree.
* ivshmem-client needs to be available as it is employed in this sample as an
external application. The same conditions as for ivshmem-server apply to the
ivshmem-client, as it is also available via QEMU.
Preparing IVSHMEM server
************************
#. The ivshmem-server utility for QEMU can be found in the Zephyr SDK
directory, at:
``/path/to/your/zephyr-sdk/zephyr-<version>/sysroots/x86_64-pokysdk-linux/usr/xilinx/bin/``
#. You may also find the ivshmem-client utility; it can be useful to check that
everything works as expected (a usage sketch is shown at the end of this section).
#. Run ivshmem-server. For the ivshmem-server, both the number of vectors and
the shared memory size are decided at run-time (when the server is executed).
For Zephyr, the number of vectors and the shared memory size of ivshmem are
decided at compile-time and run-time, respectively. For Arm64 we use
vectors == 2 for the project configuration in this sample. Here is an example:
.. code-block:: console
# n = number of vectors
$ sudo ivshmem-server -n 2
*** Example code, do not use in production ***
#. Appropriately set ownership of ``/dev/shm/ivshmem`` and
``/tmp/ivshmem_socket`` for your deployment scenario. For instance:
.. code-block:: console
$ sudo chgrp $USER /dev/shm/ivshmem
$ sudo chmod 060 /dev/shm/ivshmem
$ sudo chgrp $USER /tmp/ivshmem_socket
$ sudo chmod 060 /tmp/ivshmem_socket
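As a quick sanity check, you can attach an ivshmem-client to the running server.
The sketch below assumes the QEMU contrib ivshmem-client and its ``-S`` socket
option together with the server's default socket path; check ``ivshmem-client -h``
for the exact options on your system:
.. code-block:: console
# connect a client to the server socket created above (option and path assumed)
$ ivshmem-client -S /tmp/ivshmem_socket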
Building and Running
********************
After getting QEMU ready to go, first create two output folders; these will receive
the output of the Zephyr west commands. Open two terminal windows and, in the first
one, create the first folder:
.. code-block:: console
$ mkdir -p path/to/instance_1
In the other terminal window, create the second folder:
.. code-block:: console
$ mkdir -p path/to/instance_2
Then build the sample as follows. Don't forget that two builds are necessary
to test this sample: append the option ``-d path/to/instance_1`` in the first
terminal window and do the same with ``-d path/to/instance_2`` in the other
(see the sketch after the build command below):
.. zephyr-app-commands::
:zephyr-app: samples/drivers/ipm/ipm_ivshmem
:board: qemu_cortex_a53
:goals: build
:compact:
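For reference, the explicit ``west`` invocations would look roughly like this
(a sketch; the instance folders are the placeholder paths created above):
.. code-block:: console
# first terminal window
$ west build -b qemu_cortex_a53 samples/drivers/ipm/ipm_ivshmem -d path/to/instance_1
# second terminal window
$ west build -b qemu_cortex_a53 samples/drivers/ipm/ipm_ivshmem -d path/to/instance_2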
To run both QEMU sides, repeat the west command in each terminal window using
the run target, again appending ``-d path/to/instance_x``, where x is 1 or 2
depending on the window:
.. zephyr-app-commands::
:zephyr-app: samples/drivers/ipm/ipm_ivshmem
:board: qemu_cortex_a53
:goals: run
:compact:
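Again as a sketch, the run step for each terminal window is the same ``west build``
command pointed at its instance folder with the ``run`` target:
.. code-block:: console
# first terminal window
$ west build -d path/to/instance_1 -t run
# second terminal window
$ west build -d path/to/instance_2 -t run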
Expected output
***************
On the console, just use the ``ivshmem_ipm_send`` command followed by the
destination peer-id. To find the destination peer-id, go to the other terminal
window and check it with the ``ivshmem`` command:
.. code-block:: console
*** Booting Zephyr OS build zephyr-v3.4.0-974-g7fba7d395750 ***
uart:~$ ivshmem
IVshmem up and running:
Shared memory: 0xafa00000 of size 4194304 bytes
Peer id: 12
Notification vectors: 2
uart:~$
For example, one of the instances has the peer-id 12, so go to the other
instance and use the command to send the IPM notification to this peer-id:
.. code-block:: console
*** Booting Zephyr OS build zephyr-v3.4.0-974-g7fba7d395750 ***
uart:~$ ivshmem
IVshmem up and running:
Shared memory: 0xafa00000 of size 4194304 bytes
Peer id: 11
Notification vectors: 2
uart:~$ ivshmem_ipm_send 12
Then go back to the other terminal window, where the reception of the
notification is printed on the terminal:
.. code-block:: console
uart:~$ ivshmem
IVshmem up and running:
Shared memory: 0xafa00000 of size 4194304 bytes
Peer id: 12
Notification vectors: 2
uart:~$ Received IPM notification over IVSHMEM

@@ -0,0 +1,18 @@
/*
* Copyright 2023 Linaro.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/dt-bindings/pcie/pcie.h>
/ {
ivshmem {
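/* Standard QEMU ivshmem PCI identifiers: vendor 0x1af4 (Red Hat, Inc.), device 0x1110 */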
ivshmem0: ivshmem {
compatible = "qemu,ivshmem";
vendor-id = <0x1af4>;
device-id = <0x1110>;
status = "okay";
};
};
};

@@ -0,0 +1,16 @@
CONFIG_PCIE_CONTROLLER=y
CONFIG_PCIE_ECAM=y
# Hungry PCI requires at least 256M of virtual space
CONFIG_KERNEL_VM_SIZE=0x80000000
# Hungry PCI requires phys addresses with more than 32 bits
CONFIG_ARM64_VA_BITS_40=y
CONFIG_ARM64_PA_BITS_40=y
# MSI support requires ITS
CONFIG_GIC_V3_ITS=y
# ITS, in turn, requires dynamic memory (9x64 + alignment constraints)
# Additionally, our test also uses malloc
CONFIG_HEAP_MEM_POOL_SIZE=1048576

@@ -0,0 +1,14 @@
/*
* Copyright 2023 Linaro.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include "pcie_ivshmem.dtsi"
/ {
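/* IPM instance layered on top of the ivshmem0 doorbell device from pcie_ivshmem.dtsi */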
ipm_ivshmem0: ipm_ivshmem {
compatible = "linaro,ivshmem-ipm";
ivshmem = <&ivshmem0>;
status = "okay";
};
};

@@ -0,0 +1,14 @@
CONFIG_PCIE=y
# required by doorbell
CONFIG_PCIE_MSI=y
CONFIG_PCIE_MSI_X=y
CONFIG_PCIE_MSI_MULTI_VECTOR=y
CONFIG_POLL=y
CONFIG_VIRTUALIZATION=y
CONFIG_IVSHMEM=y
CONFIG_IVSHMEM_DOORBELL=y
CONFIG_SHELL=y
CONFIG_IVSHMEM_SHELL=y
CONFIG_IPM=y

@@ -0,0 +1,9 @@
sample:
name: IVSHMEM IPM Sample
tests:
sample.ipm.ipm_ivshmem:
build_only: true
platform_allow: qemu_cortex_a53
tags:
- samples
- ipm

@@ -0,0 +1,50 @@
/*
* Copyright 2023 Linaro.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <stdio.h>
#include <stdlib.h>
#include <zephyr/shell/shell.h>
#include <zephyr/drivers/ipm.h>
#include <zephyr/kernel.h>
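/* Called by the IPM driver when a doorbell notification arrives from a peer */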
static void ipm_receive_callback(const struct device *ipmdev, void *user_data,
uint32_t id, volatile void *data)
{
ARG_UNUSED(ipmdev);
ARG_UNUSED(user_data);
printf("Received IPM notification over IVSHMEM\n");
}
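/* Register the receive callback on the IPM device described by the ipm_ivshmem0 devicetree node */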
int main(void)
{
const struct device *ipm_dev = DEVICE_DT_GET(DT_NODELABEL(ipm_ivshmem0));
ipm_register_callback(ipm_dev, ipm_receive_callback, NULL);
return 0;
}
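/* Shell handler: parse the destination peer-id from argv[1] and send an empty (doorbell-only) notification to it */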
static int cmd_ipm_send(const struct shell *sh,
size_t argc, char **argv)
{
const struct device *ipm_dev = DEVICE_DT_GET(DT_NODELABEL(ipm_ivshmem0));
int peer_id = strtol(argv[1], NULL, 10);
return ipm_send(ipm_dev, 0, peer_id, NULL, 0);
}
SHELL_STATIC_SUBCMD_SET_CREATE(sub_ivshmem_ipm,
SHELL_CMD_ARG(ivshmem_ipm_send, NULL,
"Send notification to other side using IPM",
cmd_ipm_send, 2, 0),
SHELL_SUBCMD_SET_END);
SHELL_CMD_ARG_REGISTER(ivshmem_ipm_send,
&sub_ivshmem_ipm,
"Send notification to other side using IPM",
cmd_ipm_send, 2, 0);