diff --git a/doc/reference/usermode/syscalls.rst b/doc/reference/usermode/syscalls.rst index b949cbd34ba..b59dce3e3ca 100644 --- a/doc/reference/usermode/syscalls.rst +++ b/doc/reference/usermode/syscalls.rst @@ -38,8 +38,11 @@ All system calls have the following components: system call. The implementation function may assume that all parameters passed in have been validated if it was invoked from user mode. -* A **handler function**, which wraps the implementation function and does - validation of all the arguments passed in. +* A **verification function**, which wraps the implementation function + and does validation of all the arguments passed in. + +* An **unmarshalling function**, which is an automatically generated + handler that must be included by user source code. C Prototype *********** @@ -122,9 +125,6 @@ the project out directory under ``include/generated/``: which is expressed in ``include/generated/syscall_list.h``. It is the name of the API in uppercase, prefixed with ``K_SYSCALL_``. -* A prototype for the handler function is also created in - ``include/generated/syscall_list.h`` - * An entry for the system call is created in the dispatch table ``_k_sycall_table``, expressed in ``include/generated/syscall_dispatch.c`` @@ -135,6 +135,8 @@ the project out directory under ``include/generated/``: API call, but the sensor subsystem is not enabled, the weak handler will be invoked instead. +* An unmarshalling function is defined in ``include/generated/_mrsh.c`` + The body of the API is created in the generated system header. Using the example of :c:func:`k_sem_init()`, this API is declared in ``include/kernel.h``. At the bottom of ``include/kernel.h`` is:: @@ -143,51 +145,22 @@ example of :c:func:`k_sem_init()`, this API is declared in Inside this header is the body of :c:func:`k_sem_init()`:: - K_SYSCALL_DECLARE3_VOID(K_SYSCALL_K_SEM_INIT, k_sem_init, struct k_sem *, - sem, unsigned int, initial_count, - unsigned int, limit); + static inline void k_sem_init(struct k_sem * sem, unsigned int initial_count, unsigned int limit) + { + #ifdef CONFIG_USERSPACE + if (z_syscall_trap()) { + z_arch_syscall_invoke3(*(u32_t *)&sem, *(u32_t *)&initial_count, *(u32_t *)&limit, K_SYSCALL_K_SEM_INIT); + return; + } + compiler_barrier(); + #endif + z_impl_k_sem_init(sem, initial_count, limit); + } This generates an inline function that takes three arguments with void return value. Depending on context it will either directly call the implementation function or go through a system call elevation. A prototype for the implementation function is also automatically generated. 
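On the kernel side, the trap is routed through the generated unmarshalling function for the API, which stashes the syscall frame, restores the typed arguments from the six register words, and calls the verification function. The following is only an illustrative sketch of that generated code — the exact output comes from the generation scripts, and the ``z_mrsh_`` naming and details shown here are assumptions::

   u32_t z_mrsh_k_sem_init(u32_t arg0, u32_t arg1, u32_t arg2,
                           u32_t arg3, u32_t arg4, u32_t arg5,
                           void *ssf)
   {
           _current_cpu->syscall_frame = ssf;
           (void)arg3; /* unused */
           (void)arg4; /* unused */
           (void)arg5; /* unused */

           /* Restore the typed arguments and run the verification function */
           z_vrfy_k_sem_init(*(struct k_sem **)&arg0,
                             *(unsigned int *)&arg1,
                             *(unsigned int *)&arg2);

           _current_cpu->syscall_frame = NULL;
           return 0;
   }

A pointer to a function of this shape is what would populate the dispatch table slot for ``K_SYSCALL_K_SEM_INIT``, since its signature matches the :c:type:`_k_syscall_handler_t` typedef.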
-In this example, the implementation of the :c:macro:`K_SYSCALL_DECLARE3_VOID()` -macro will be:: - - #if !defined(CONFIG_USERSPACE) || defined(__ZEPHYR_SUPERVISOR__) - - #define K_SYSCALL_DECLARE3_VOID(id, name, t0, p0, t1, p1, t2, p2) \ - extern void _impl_##name(t0 p0, t1 p1, t2 p2); \ - static inline void name(t0 p0, t1 p1, t2 p2) \ - { \ - _impl_##name(p0, p1, p2); \ - } - - #elif defined(__ZEPHYR_USER__) - #define K_SYSCALL_DECLARE3_VOID(id, name, t0, p0, t1, p1, t2, p2) \ - static inline void name(t0 p0, t1 p1, t2 p2) \ - { \ - _arch_syscall_invoke3((u32_t)p0, (u32_t)p1, (u32_t)p2, id); \ - } - - #else /* mixed kernel/user macros */ - #define K_SYSCALL_DECLARE3_VOID(id, name, t0, p0, t1, p1, t2, p2) \ - extern void _impl_##name(t0 p0, t1 p1, t2 p2); \ - static inline void name(t0 p0, t1 p1, t2 p2) \ - { \ - if (_is_user_context()) { \ - _arch_syscall_invoke3((u32_t)p0, (u32_t)p1, (u32_t)p2, id); \ - } else { \ - compiler_barrier(); \ - _impl_##name(p0, p1, p2); \ - } \ - } - #endif - -The header containing :c:macro:`K_SYSCALL_DECLARE3_VOID()` is itself -generated due to its repetitive nature and can be found in -``include/generated/syscall_macros.h``. It is created by -``scripts/gen_syscall_header.py``. The final layer is the invocation of the system call itself. All architectures implementing system calls must implement the seven inline functions @@ -197,17 +170,19 @@ necessary privilege elevation. In this layer, all arguments are treated as an unsigned 32-bit type. There is always a 32-bit unsigned return value, which may or may not be used. -Some system calls may have more than six arguments. The number of arguments -passed via registers is fixed at six for all architectures. Additional -arguments will need to be passed in a struct, which needs to be treated as -untrusted memory in the handler function. This is done by the derived -functions :c:func:`_syscall_invoke7` through :c:func:`_syscall_invoke10`. +Some system calls may have more than six arguments. The number of +arguments passed via registers is limited to six for all +architectures. Additional arguments will need to be passed in an array +in the source memory space, which needs to be treated as untrusted +memory in the handler function. This code (packing, unpacking and +validation) is generated automatically as needed in the stub above and +in the unmarshalling function. -Some system calls may return a value that will not fit in a 32-bit register, -such as APIs that return a 64-bit value. In this scenario, the return value is -populated in a memory buffer that is passed in as an argument. For example, -see the implementation of :c:func:`_syscall_ret64_invoke0` and -:c:func:`_syscall_ret64_invoke1`. +Some system calls may return a value that will not fit in a 32-bit +register, such as APIs that return a 64-bit value. In this scenario, +the return value is populated in a **untrusted** memory buffer that is +passed in as a final argument. Likewise, this code is generated +automatically. Implementation Function *********************** @@ -312,159 +287,32 @@ If any check fails, the macros will return a nonzero value. The macro calling thread. This is done instead of returning some error condition to keep the APIs the same when calling from supervisor mode. -Handler Declaration +Verifier Definition =================== -All handler functions have the same prototype: +All system calls are dispatched to a verifier function with a prefixed +``z_vrfy_`` name based on the system call. 
They have exactly the same +return type and argument types as the wrapped system call. Their job +is to execute the system call (generally by calling the implementation +function) after having validated all arguments. + +The verifier is itself invoked by an automatically generated +unmarshaller function which takes care of unpacking the register +arguments from the architecture layer and casting them to the correct +type. This is defined in a header file that must be included from +user code, generally somewhere after the definition of the verifier in +a translation unit (so that it can be inlined). + +For example: .. code-block:: c - u32_t _handler_(u32_t arg1, u32_t arg2, u32_t arg3, - u32_t arg4, u32_t arg5, u32_t arg6, void *ssf) - -All handlers return a value. Handlers are passed exactly six arguments, which -were sent from user mode to the kernel via registers in the -architecture-specific system call implementation, plus an opaque context -pointer which indicates the system state when the system call was invoked from -user code. - -To simplify the prototype, the variadic :c:macro:`Z_SYSCALL_HANDLER()` macro -should be used to declare the handler name and names of each argument. Type -information is not necessary since all arguments and the return value are -:c:type:`u32_t`. Using :c:func:`k_sem_init()` as an example: - -.. code-block:: c - - Z_SYSCALL_HANDLER(k_sem_init, sem, initial_count, limit) + static int z_vrfy_k_sem_take(struct k_sem *sem, s32_t timeout) { - ... - } - -After validating all the arguments, the handler function needs to then call -the implementation function. If the implementation function returns a value, -this needs to be returned by the handler, otherwise the handler should return -0. - -.. note:: Do not forget that all the arguments to the handler are passed in as - unsigned 32-bit values. If checks are needed on parameters that are - actually signed values, casts may be needed in order for these checks to - be performed properly. - -Using :c:func:`k_sem_init()` as an example again, we need to enforce that the -semaphore object passed in is a valid semaphore object (but not necessarily -initialized), and that the limit parameter is nonzero: - -.. code-block:: c - - Z_SYSCALL_HANDLER(k_sem_init, sem, initial_count, limit) - { - Z_OOPS(Z_SYSCALL_OBJ_INIT(sem, K_OBJ_SEM)); - Z_OOPS(Z_SYSCALL_VERIFY(limit != 0)); - _impl_k_sem_init((struct k_sem *)sem, initial_count, limit); - return 0; - } - -Simple Handler Declarations ---------------------------- - -Many kernel or driver APIs have very simple handler functions, where they -either accept no arguments, or take one object which is a kernel object -pointer of some specific type. Some special macros have been defined for -these simple cases, with variants depending on whether the API has a return -value: - -* :c:macro:`Z_SYSCALL_HANDLER1_SIMPLE()` one kernel object argument, returns - a value -* :c:macro:`Z_SYSCALL_HANDLER1_SIMPLE_VOID()` one kernel object argument, - no return value -* :c:macro:`Z_SYSCALL_HANDLER0_SIMPLE()` no arguments, returns a value -* :c:macro:`Z_SYSCALL_HANDLER0_SIMPLE_VOID()` no arguments, no return value - -For example, :c:func:`k_sem_count_get()` takes a semaphore object as its -only argument and returns a value, so its handler can be completely expressed -as: - -.. 
code-block:: c - - Z_SYSCALL_HANDLER1_SIMPLE(k_sem_count_get, K_OBJ_SEM, struct k_sem *); - -System Calls With 6 Or More Arguments -===================================== - -System calls may have more than six arguments, however the number of arguments -passed in via registers when the privilege elevation is invoked is fixed at six -for all architectures. In this case, the sixth and subsequent arguments to the -system call are placed into a struct, and a pointer to that struct is passed to -the handler as its sixth argument. - -See ``include/syscall.h`` to see how this is done; the struct passed in must be -validated like any other memory buffer. For example, for a system call -with nine arguments, arguments 6 through 9 will be passed in via struct, which -must be verified since memory pointers from user mode can be incorrect or -malicious: - -.. code-block:: c - - Z_SYSCALL_HANDLER(k_foo, arg1, arg2, arg3, arg4, arg5, more_args_ptr) - { - struct _syscall_9_args *margs = (struct _syscall_9_args *)more_args_ptr; - - Z_OOPS(Z_SYSCALL_MEMORY_READ(margs, sizeof(*margs))); - - ... - - } - -It is also very important to note that arguments passed in this way can change -at any time due to concurrent access to the argument struct. If any parameters -are subject to enforcement checks, they need to be copied out of the struct and -only then checked. One way to ensure this isn't optimized out is to declare the -argument struct as ``volatile``, and copy values out of it into local variables -before checking. Using the previous example: - -.. code-block:: c - - Z_SYSCALL_HANDLER(k_foo, arg1, arg2, arg3, arg4, arg5, more_args_ptr) - { - volatile struct _syscall_9_args *margs = - (struct _syscall_9_args *)more_args_ptr; - int arg8; - - Z_OOPS(Z_SYSCALL_MEMORY_READ(margs, sizeof(*margs))); - arg8 = margs->arg8; - Z_OOPS(Z_SYSCALL_VERIFY_MSG(arg8 < 12, "arg8 must be less than 12")); - - _impl_k_foo(arg1, arg2, arg3, arg3, arg4, arg5, margs->arg6, - margs->arg7, arg8, margs->arg9); - return 0; - } - - -System Calls With 64-bit Return Value -===================================== - -If a system call has a return value larger than 32-bits, the handler will not -return anything. Instead, a pointer to a sufficient memory region for the -return value will be passed in as an additional argument. As an example, we -have the system call for getting the current system uptime: - -.. code-block:: c - - __syscall s64_t k_uptime_get(void); - -The handler function has the return area passed in as a pointer, which must -be validated as writable by the calling thread: - -.. 
code-block:: c - - Z_SYSCALL_HANDLER(k_uptime_get, ret_p) - { - s64_t *ret = (s64_t *)ret_p; - - Z_OOPS(Z_SYSCALL_MEMORY_WRITE(ret, sizeof(*ret))); - *ret = _impl_k_uptime_get(); - return 0; + Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM)); + return z_impl_k_sem_take(sem, timeout); } + #include Configuration Options ********************* @@ -479,11 +327,6 @@ APIs Helper macros for creating system call handlers are provided in :zephyr_file:`kernel/include/syscall_handler.h`: -* :c:macro:`Z_SYSCALL_HANDLER()` -* :c:macro:`Z_SYSCALL_HANDLER1_SIMPLE()` -* :c:macro:`Z_SYSCALL_HANDLER1_SIMPLE_VOID()` -* :c:macro:`Z_SYSCALL_HANDLER0_SIMPLE()` -* :c:macro:`Z_SYSCALL_HANDLER0_SIMPLE_VOID()` * :c:macro:`Z_SYSCALL_OBJ()` * :c:macro:`Z_SYSCALL_OBJ_INIT()` * :c:macro:`Z_SYSCALL_OBJ_NEVER_INIT()` @@ -505,10 +348,4 @@ Functions for invoking system calls are defined in * :c:func:`_arch_syscall_invoke4` * :c:func:`_arch_syscall_invoke5` * :c:func:`_arch_syscall_invoke6` -* :c:func:`_syscall_invoke7` -* :c:func:`_syscall_invoke8` -* :c:func:`_syscall_invoke9` -* :c:func:`_syscall_invoke10` -* :c:func:`_syscall_ret64_invoke0` -* :c:func:`_syscall_ret64_invoke1` diff --git a/drivers/ptp_clock/ptp_clock.c b/drivers/ptp_clock/ptp_clock.c index ce9ce02561b..5c81d56be25 100644 --- a/drivers/ptp_clock/ptp_clock.c +++ b/drivers/ptp_clock/ptp_clock.c @@ -8,7 +8,8 @@ #include #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(ptp_clock_get, dev, tm) +int z_vrfy_ptp_clock_get(struct device *dev, + struct net_ptp_time *tm) { struct net_ptp_time ptp_time; int ret; @@ -25,6 +26,7 @@ Z_SYSCALL_HANDLER(ptp_clock_get, dev, tm) return 0; } - return (u32_t)ret; + return ret; } +#include #endif /* CONFIG_USERSPACE */ diff --git a/include/syscall.h b/include/syscall.h index cd27a833540..613c39c74db 100644 --- a/include/syscall.h +++ b/include/syscall.h @@ -34,44 +34,17 @@ extern "C" { * - Mixed or indeterminate code, these inlines will do a runtime check * to determine what course of action is needed. * - * All system calls require a handler function and an implementation function. - * These must follow a naming convention. For a system call named k_foo(): + * All system calls require a verifier function and an implementation + * function. These must follow a naming convention. For a system call + * named k_foo(): * - * - The handler function will be named z_hdlr_k_foo(). Handler functions - * are always of type _k_syscall_handler_t, verify arguments passed up - * from userspace, and call the implementation function. See - * documentation for that typedef for more information. - * - The implementation function will be named z_impl_k_foo(). This is the - * actual implementation of the system call. - * - * The basic declartion macros are as follows. System calls with 0 to 10 - * parameters are supported. For a system call with N parameters, that returns - * a value and is* not implemented inline, the macro is as follows (N noted - * as {N} for clarity): - * - * K_SYSCALL_DECLARE{N}(id, name, ret, t0, p0, ... , t{N-1}, p{N-1}) - - * @param id System call ID, one of K_SYSCALL_* defines - * @param name Symbol name of the system call used to invoke it - * @param ret Data type of return value - * @param tX Data type of parameter X - * @param pX Name of parameter x - * - * For system calls that return no value: - * - * K_SYSCALL_DECLARE{n}_VOID(id, name, t0, p0, .... , t{N-1}, p{N-1}) - * - * This is identical to above except there is no 'ret' parameter. 
- * - * For system calls where the implementation is an inline function, we have - * - * K_SYSCALL_DECLARE{n}_INLINE(id, name, ret, t0, p0, ... , t{N-1}, p{N-1}) - * K_SYSCALL_DECLARE{n}_VOID_INLINE(id, name, t0, p0, ... , t{N-1}, p{N-1}) - * - * These are used in the same way as their non-INLINE counterparts. - * - * These macros are generated by scripts/gen_syscall_header.py and can be - * found in $OUTDIR/include/generated/syscall_macros.h + * - The handler function will be named z_vrfy_k_foo(). Handler + * functions have the same type signature as the wrapped call, + * verify arguments passed up from userspace, and call the + * implementation function. See documentation for that typedef for + * more information. - The implementation function will be named + * z_impl_k_foo(). This is the actual implementation of the system + * call. */ /** @@ -115,36 +88,6 @@ typedef u32_t (*_k_syscall_handler_t)(u32_t arg1, u32_t arg2, u32_t arg3, void *ssf); #ifdef CONFIG_USERSPACE -/* - * Helper data structures for system calls with large argument lists - */ - -struct _syscall_7_args { - u32_t arg6; - u32_t arg7; -}; - -struct _syscall_8_args { - u32_t arg6; - u32_t arg7; - u32_t arg8; -}; - -struct _syscall_9_args { - u32_t arg6; - u32_t arg7; - u32_t arg8; - u32_t arg9; -}; - -struct _syscall_10_args { - u32_t arg6; - u32_t arg7; - u32_t arg8; - u32_t arg9; - u32_t arg10; -}; - /* * Interfaces for invoking system calls */ @@ -170,98 +113,36 @@ static inline u32_t z_arch_syscall_invoke6(u32_t arg1, u32_t arg2, u32_t arg3, u32_t arg4, u32_t arg5, u32_t arg6, u32_t call_id); -static inline u32_t z_syscall_invoke7(u32_t arg1, u32_t arg2, u32_t arg3, - u32_t arg4, u32_t arg5, u32_t arg6, - u32_t arg7, u32_t call_id) { - struct _syscall_7_args args = { - .arg6 = arg6, - .arg7 = arg7, - }; - - return z_arch_syscall_invoke6(arg1, arg2, arg3, arg4, arg5, (u32_t)&args, - call_id); -} - -static inline u32_t z_syscall_invoke8(u32_t arg1, u32_t arg2, u32_t arg3, - u32_t arg4, u32_t arg5, u32_t arg6, - u32_t arg7, u32_t arg8, u32_t call_id) -{ - struct _syscall_8_args args = { - .arg6 = arg6, - .arg7 = arg7, - .arg8 = arg8, - }; - - return z_arch_syscall_invoke6(arg1, arg2, arg3, arg4, arg5, (u32_t)&args, - call_id); -} - -static inline u32_t z_syscall_invoke9(u32_t arg1, u32_t arg2, u32_t arg3, - u32_t arg4, u32_t arg5, u32_t arg6, - u32_t arg7, u32_t arg8, u32_t arg9, - u32_t call_id) -{ - struct _syscall_9_args args = { - .arg6 = arg6, - .arg7 = arg7, - .arg8 = arg8, - .arg9 = arg9, - }; - - return z_arch_syscall_invoke6(arg1, arg2, arg3, arg4, arg5, (u32_t)&args, - call_id); -} - -static inline u32_t z_syscall_invoke10(u32_t arg1, u32_t arg2, u32_t arg3, - u32_t arg4, u32_t arg5, u32_t arg6, - u32_t arg7, u32_t arg8, u32_t arg9, - u32_t arg10, u32_t call_id) -{ - struct _syscall_10_args args = { - .arg6 = arg6, - .arg7 = arg7, - .arg8 = arg8, - .arg9 = arg9, - .arg10 = arg10 - }; - - return z_arch_syscall_invoke6(arg1, arg2, arg3, arg4, arg5, (u32_t)&args, - call_id); -} - -static inline u64_t z_syscall_ret64_invoke0(u32_t call_id) -{ - u64_t ret; - - (void)z_arch_syscall_invoke1((u32_t)&ret, call_id); - return ret; -} - -static inline u64_t z_syscall_ret64_invoke1(u32_t arg1, u32_t call_id) -{ - u64_t ret; - - (void)z_arch_syscall_invoke2(arg1, (u32_t)&ret, call_id); - return ret; -} - -static inline u64_t z_syscall_ret64_invoke2(u32_t arg1, u32_t arg2, - u32_t call_id) -{ - u64_t ret; - - (void)z_arch_syscall_invoke3(arg1, arg2, (u32_t)&ret, call_id); - return ret; -} +#endif /* CONFIG_USERSPACE */ /** * 
Indicate whether we are currently running in user mode * * @return true if the CPU is currently running with user permissions */ +#ifdef CONFIG_USERSPACE static inline bool z_arch_is_user_context(void); +#else +#define z_arch_is_user_context() (true) +#endif -#endif /* CONFIG_USERSPACE */ +/* True if a syscall function must trap to the kernel, usually a + * compile-time decision. + */ +static ALWAYS_INLINE bool z_syscall_trap(void) +{ + bool ret = false; +#ifdef CONFIG_USERSPACE +#if defined(__ZEPHYR_SUPERVISOR__) + ret = false; +#elif defined(__ZEPHYR_USER__) + ret = true; +#else + ret = z_arch_is_user_context(); +#endif +#endif + return ret; +} /** * Indicate whether the CPU is currently in user mode diff --git a/kernel/device.c b/kernel/device.c index 496dd39d6a9..af517698b29 100644 --- a/kernel/device.c +++ b/kernel/device.c @@ -95,7 +95,7 @@ struct device *z_impl_device_get_binding(const char *name) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(device_get_binding, name) +static inline struct device *z_vrfy_device_get_binding(const char *name) { char name_copy[Z_DEVICE_MAX_NAME_LEN]; @@ -104,8 +104,9 @@ Z_SYSCALL_HANDLER(device_get_binding, name) return 0; } - return (u32_t)z_impl_device_get_binding(name_copy); + return z_impl_device_get_binding(name_copy); } +#include #endif /* CONFIG_USERSPACE */ #ifdef CONFIG_DEVICE_POWER_MANAGEMENT diff --git a/kernel/errno.c b/kernel/errno.c index 3919600d5ff..e23593785fc 100644 --- a/kernel/errno.c +++ b/kernel/errno.c @@ -32,7 +32,12 @@ int *z_impl_z_errno(void) return &_current->userspace_local_data->errno_var; } -Z_SYSCALL_HANDLER0_SIMPLE(z_errno); +static inline int *z_vrfy_z_errno(void) +{ + return z_impl_z_errno(); +} +#include + #else int *z_impl_z_errno(void) { diff --git a/kernel/futex.c b/kernel/futex.c index e1c88d0a796..0a3cd86cb5d 100644 --- a/kernel/futex.c +++ b/kernel/futex.c @@ -52,14 +52,15 @@ int z_impl_k_futex_wake(struct k_futex *futex, bool wake_all) return woken; } -Z_SYSCALL_HANDLER(k_futex_wake, futex, wake_all) +static inline int z_vrfy_k_futex_wake(struct k_futex *futex, bool wake_all) { if (Z_SYSCALL_MEMORY_WRITE(futex, sizeof(struct k_futex)) != 0) { return -EACCES; } - return z_impl_k_futex_wake((struct k_futex *)futex, (bool)wake_all); + return z_impl_k_futex_wake(futex, wake_all); } +#include int z_impl_k_futex_wait(struct k_futex *futex, int expected, s32_t timeout) { @@ -88,12 +89,12 @@ int z_impl_k_futex_wait(struct k_futex *futex, int expected, s32_t timeout) return ret; } -Z_SYSCALL_HANDLER(k_futex_wait, futex, expected, timeout) +static inline int z_vrfy_k_futex_wait(struct k_futex *futex, int expected, s32_t timeout) { if (Z_SYSCALL_MEMORY_WRITE(futex, sizeof(struct k_futex)) != 0) { return -EACCES; } - return z_impl_k_futex_wait((struct k_futex *)futex, - expected, (s32_t)timeout); + return z_impl_k_futex_wait(futex, expected, timeout); } +#include diff --git a/kernel/include/kernel_structs.h b/kernel/include/kernel_structs.h index 78a722d36ba..49fd1e2821a 100644 --- a/kernel/include/kernel_structs.h +++ b/kernel/include/kernel_structs.h @@ -99,6 +99,11 @@ struct _cpu { /* one assigned idle thread per CPU */ struct k_thread *idle_thread; +#ifdef CONFIG_USERSPACE + /* current syscall frame pointer */ + void *syscall_frame; +#endif + #ifdef CONFIG_TIMESLICING /* number of ticks remaining in current time slice */ int slice_ticks; diff --git a/kernel/include/syscall_handler.h b/kernel/include/syscall_handler.h index d5e3b626aee..377c2dfef56 100644 --- a/kernel/include/syscall_handler.h +++ 
b/kernel/include/syscall_handler.h @@ -15,6 +15,7 @@ #include #include #include +#include #include extern const _k_syscall_handler_t _k_syscall_table[K_SYSCALL_LIMIT]; @@ -259,7 +260,7 @@ extern int z_user_string_copy(char *dst, const char *src, size_t maxlen); #define Z_OOPS(expr) \ do { \ if (expr) { \ - z_arch_syscall_oops(ssf); \ + z_arch_syscall_oops(_current_cpu->syscall_frame); \ } \ } while (false) @@ -507,129 +508,6 @@ static inline int z_obj_validation_check(struct _k_object *ko, #define Z_SYSCALL_OBJ_NEVER_INIT(ptr, type) \ Z_SYSCALL_IS_OBJ(ptr, type, _OBJ_INIT_FALSE) -/* - * Handler definition macros - * - * All handlers have the same prototype: - * - * u32_t _handler_APINAME(u32_t arg1, u32_t arg2, u32_t arg3, - * u32_t arg4, u32_t arg5, u32_t arg6, void *ssf); - * - * These make it much simpler to define handlers instead of typing out - * the bolierplate. The macros ensure that the seventh argument is named - * "ssf" as this is now referenced by various other Z_SYSCALL macros. - * - * Use the Z_SYSCALL_HANDLER(name_, arg1, ..., arg6) variant, as it will - * automatically deduce the correct version of Z__SYSCALL_HANDLERn() to - * use depending on the number of arguments. - */ - -#define Z__SYSCALL_HANDLER0(name_) \ - u32_t z_hdlr_ ## name_(u32_t arg1 __unused, \ - u32_t arg2 __unused, \ - u32_t arg3 __unused, \ - u32_t arg4 __unused, \ - u32_t arg5 __unused, \ - u32_t arg6 __unused, \ - void *ssf) - -#define Z__SYSCALL_HANDLER1(name_, arg1_) \ - u32_t z_hdlr_ ## name_(u32_t arg1_, \ - u32_t arg2 __unused, \ - u32_t arg3 __unused, \ - u32_t arg4 __unused, \ - u32_t arg5 __unused, \ - u32_t arg6 __unused, \ - void *ssf) - -#define Z__SYSCALL_HANDLER2(name_, arg1_, arg2_) \ - u32_t z_hdlr_ ## name_(u32_t arg1_, \ - u32_t arg2_, \ - u32_t arg3 __unused, \ - u32_t arg4 __unused, \ - u32_t arg5 __unused, \ - u32_t arg6 __unused, \ - void *ssf) - -#define Z__SYSCALL_HANDLER3(name_, arg1_, arg2_, arg3_) \ - u32_t z_hdlr_ ## name_(u32_t arg1_, \ - u32_t arg2_, \ - u32_t arg3_, \ - u32_t arg4 __unused, \ - u32_t arg5 __unused, \ - u32_t arg6 __unused, \ - void *ssf) - -#define Z__SYSCALL_HANDLER4(name_, arg1_, arg2_, arg3_, arg4_) \ - u32_t z_hdlr_ ## name_(u32_t arg1_, \ - u32_t arg2_, \ - u32_t arg3_, \ - u32_t arg4_, \ - u32_t arg5 __unused, \ - u32_t arg6 __unused, \ - void *ssf) - -#define Z__SYSCALL_HANDLER5(name_, arg1_, arg2_, arg3_, arg4_, arg5_) \ - u32_t z_hdlr_ ## name_(u32_t arg1_, \ - u32_t arg2_, \ - u32_t arg3_, \ - u32_t arg4_, \ - u32_t arg5_, \ - u32_t arg6 __unused, \ - void *ssf) - -#define Z__SYSCALL_HANDLER6(name_, arg1_, arg2_, arg3_, arg4_, arg5_, arg6_) \ - u32_t z_hdlr_ ## name_(u32_t arg1_, \ - u32_t arg2_, \ - u32_t arg3_, \ - u32_t arg4_, \ - u32_t arg5_, \ - u32_t arg6_, \ - void *ssf) - -#define Z_SYSCALL_CONCAT(arg1, arg2) Z__SYSCALL_CONCAT(arg1, arg2) -#define Z__SYSCALL_CONCAT(arg1, arg2) Z___SYSCALL_CONCAT(arg1, arg2) -#define Z___SYSCALL_CONCAT(arg1, arg2) arg1##arg2 - -#define Z_SYSCALL_NARG(...) Z__SYSCALL_NARG(__VA_ARGS__, Z__SYSCALL_RSEQ_N()) -#define Z__SYSCALL_NARG(...) Z__SYSCALL_ARG_N(__VA_ARGS__) -#define Z__SYSCALL_ARG_N(_1, _2, _3, _4, _5, _6, _7, N, ...) N -#define Z__SYSCALL_RSEQ_N() 6, 5, 4, 3, 2, 1, 0 - -#define Z_SYSCALL_HANDLER(...) \ - Z_SYSCALL_CONCAT(Z__SYSCALL_HANDLER, \ - Z_SYSCALL_NARG(__VA_ARGS__))(__VA_ARGS__) - -/* - * Helper macros for a very common case: calls which just take one argument - * which is an initialized kernel object of a specific type. Verify the object - * and call the implementation. 
- */ - -#define Z_SYSCALL_HANDLER1_SIMPLE(name_, obj_enum_, obj_type_) \ - Z__SYSCALL_HANDLER1(name_, arg1) { \ - Z_OOPS(Z_SYSCALL_OBJ(arg1, obj_enum_)); \ - return (u32_t)z_impl_ ## name_((obj_type_)arg1); \ - } - -#define Z_SYSCALL_HANDLER1_SIMPLE_VOID(name_, obj_enum_, obj_type_) \ - Z__SYSCALL_HANDLER1(name_, arg1) { \ - Z_OOPS(Z_SYSCALL_OBJ(arg1, obj_enum_)); \ - z_impl_ ## name_((obj_type_)arg1); \ - return 0; \ - } - -#define Z_SYSCALL_HANDLER0_SIMPLE(name_) \ - Z__SYSCALL_HANDLER0(name_) { \ - return (u32_t)z_impl_ ## name_(); \ - } - -#define Z_SYSCALL_HANDLER0_SIMPLE_VOID(name_) \ - Z__SYSCALL_HANDLER0(name_) { \ - z_impl_ ## name_(); \ - return 0; \ - } - #include #endif /* _ASMLANGUAGE */ diff --git a/kernel/msg_q.c b/kernel/msg_q.c index 160d789ccaf..9bb2a59a88f 100644 --- a/kernel/msg_q.c +++ b/kernel/msg_q.c @@ -87,12 +87,14 @@ int z_impl_k_msgq_alloc_init(struct k_msgq *msgq, size_t msg_size, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_msgq_alloc_init, q, msg_size, max_msgs) +int z_vrfy_k_msgq_alloc_init(struct k_msgq *q, size_t msg_size, + u32_t max_msgs) { Z_OOPS(Z_SYSCALL_OBJ_NEVER_INIT(q, K_OBJ_MSGQ)); - return z_impl_k_msgq_alloc_init((struct k_msgq *)q, msg_size, max_msgs); + return z_impl_k_msgq_alloc_init(q, msg_size, max_msgs); } +#include #endif void k_msgq_cleanup(struct k_msgq *msgq) @@ -153,15 +155,14 @@ int z_impl_k_msgq_put(struct k_msgq *msgq, void *data, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_msgq_put, msgq_p, data, timeout) +static inline int z_vrfy_k_msgq_put(struct k_msgq *q, void *data, s32_t timeout) { - struct k_msgq *q = (struct k_msgq *)msgq_p; - Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); Z_OOPS(Z_SYSCALL_MEMORY_READ(data, q->msg_size)); - return z_impl_k_msgq_put(q, (void *)data, timeout); + return z_impl_k_msgq_put(q, data, timeout); } +#include #endif void z_impl_k_msgq_get_attrs(struct k_msgq *msgq, struct k_msgq_attrs *attrs) @@ -172,15 +173,13 @@ void z_impl_k_msgq_get_attrs(struct k_msgq *msgq, struct k_msgq_attrs *attrs) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_msgq_get_attrs, msgq_p, attrs) +static inline void z_vrfy_k_msgq_get_attrs(struct k_msgq *q, struct k_msgq_attrs *attrs) { - struct k_msgq *q = (struct k_msgq *)msgq_p; - Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(attrs, sizeof(struct k_msgq_attrs))); - z_impl_k_msgq_get_attrs(q, (struct k_msgq_attrs *) attrs); - return 0; + z_impl_k_msgq_get_attrs(q, attrs); } +#include #endif int z_impl_k_msgq_get(struct k_msgq *msgq, void *data, s32_t timeout) @@ -236,15 +235,14 @@ int z_impl_k_msgq_get(struct k_msgq *msgq, void *data, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_msgq_get, msgq_p, data, timeout) +static inline int z_vrfy_k_msgq_get(struct k_msgq *q, void *data, s32_t timeout) { - struct k_msgq *q = (struct k_msgq *)msgq_p; - Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(data, q->msg_size)); - return z_impl_k_msgq_get(q, (void *)data, timeout); + return z_impl_k_msgq_get(q, data, timeout); } +#include #endif int z_impl_k_msgq_peek(struct k_msgq *msgq, void *data) @@ -269,15 +267,14 @@ int z_impl_k_msgq_peek(struct k_msgq *msgq, void *data) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_msgq_peek, msgq_p, data) +static inline int z_vrfy_k_msgq_peek(struct k_msgq *q, void *data) { - struct k_msgq *q = (struct k_msgq *)msgq_p; - Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(data, q->msg_size)); - return z_impl_k_msgq_peek(q, (void *)data); + return z_impl_k_msgq_peek(q, 
data); } +#include #endif void z_impl_k_msgq_purge(struct k_msgq *msgq) @@ -300,7 +297,25 @@ void z_impl_k_msgq_purge(struct k_msgq *msgq) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_msgq_purge, K_OBJ_MSGQ, struct k_msgq *); -Z_SYSCALL_HANDLER1_SIMPLE(k_msgq_num_free_get, K_OBJ_MSGQ, struct k_msgq *); -Z_SYSCALL_HANDLER1_SIMPLE(k_msgq_num_used_get, K_OBJ_MSGQ, struct k_msgq *); +static inline void z_vrfy_k_msgq_purge(struct k_msgq *q) +{ + Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); + z_impl_k_msgq_purge(q); +} +#include + +static inline u32_t z_vrfy_k_msgq_num_free_get(struct k_msgq *q) +{ + Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); + return z_impl_k_msgq_num_free_get(q); +} +#include + +static inline u32_t z_vrfy_k_msgq_num_used_get(struct k_msgq *q) +{ + Z_OOPS(Z_SYSCALL_OBJ(q, K_OBJ_MSGQ)); + return z_impl_k_msgq_num_used_get(q); +} +#include + #endif diff --git a/kernel/mutex.c b/kernel/mutex.c index 5590e8f1871..c270cca93db 100644 --- a/kernel/mutex.c +++ b/kernel/mutex.c @@ -81,13 +81,12 @@ void z_impl_k_mutex_init(struct k_mutex *mutex) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_mutex_init, mutex) +static inline void z_vrfy_k_mutex_init(struct k_mutex *mutex) { Z_OOPS(Z_SYSCALL_OBJ_INIT(mutex, K_OBJ_MUTEX)); - z_impl_k_mutex_init((struct k_mutex *)mutex); - - return 0; + z_impl_k_mutex_init(mutex); } +#include #endif static s32_t new_prio_for_inheritance(s32_t target, s32_t limit) @@ -196,11 +195,12 @@ int z_impl_k_mutex_lock(struct k_mutex *mutex, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_mutex_lock, mutex, timeout) +static inline int z_vrfy_k_mutex_lock(struct k_mutex *mutex, s32_t timeout) { Z_OOPS(Z_SYSCALL_OBJ(mutex, K_OBJ_MUTEX)); - return z_impl_k_mutex_lock((struct k_mutex *)mutex, (s32_t)timeout); + return z_impl_k_mutex_lock(mutex, timeout); } +#include #endif void z_impl_k_mutex_unlock(struct k_mutex *mutex) @@ -255,12 +255,12 @@ k_mutex_unlock_return: } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_mutex_unlock, mutex) +static inline void z_vrfy_k_mutex_unlock(struct k_mutex *mutex) { Z_OOPS(Z_SYSCALL_OBJ(mutex, K_OBJ_MUTEX)); - Z_OOPS(Z_SYSCALL_VERIFY(((struct k_mutex *)mutex)->lock_count > 0)); - Z_OOPS(Z_SYSCALL_VERIFY(((struct k_mutex *)mutex)->owner == _current)); - z_impl_k_mutex_unlock((struct k_mutex *)mutex); - return 0; + Z_OOPS(Z_SYSCALL_VERIFY(mutex->lock_count > 0)); + Z_OOPS(Z_SYSCALL_VERIFY(mutex->owner == _current)); + z_impl_k_mutex_unlock(mutex); } +#include #endif diff --git a/kernel/pipes.c b/kernel/pipes.c index dbea28f6596..83a1c5b1461 100644 --- a/kernel/pipes.c +++ b/kernel/pipes.c @@ -164,12 +164,13 @@ int z_impl_k_pipe_alloc_init(struct k_pipe *pipe, size_t size) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_pipe_alloc_init, pipe, size) +static inline int z_vrfy_k_pipe_alloc_init(struct k_pipe *pipe, size_t size) { Z_OOPS(Z_SYSCALL_OBJ_NEVER_INIT(pipe, K_OBJ_PIPE)); - return z_impl_k_pipe_alloc_init((struct k_pipe *)pipe, size); + return z_impl_k_pipe_alloc_init(pipe, size); } +#include #endif void k_pipe_cleanup(struct k_pipe *pipe) @@ -710,12 +711,9 @@ int z_impl_k_pipe_get(struct k_pipe *pipe, void *data, size_t bytes_to_read, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_pipe_get, - pipe, data, bytes_to_read, bytes_read_p, min_xfer_p, timeout) +int z_vrfy_k_pipe_get(struct k_pipe *pipe, void *data, size_t bytes_to_read, + size_t *bytes_read, size_t min_xfer, s32_t timeout) { - size_t *bytes_read = (size_t *)bytes_read_p; - size_t min_xfer = (size_t)min_xfer_p; - Z_OOPS(Z_SYSCALL_OBJ(pipe, K_OBJ_PIPE)); 
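	/*
	 * The pipe object itself was validated above; bytes_read and data
	 * are caller-supplied pointers into user memory, so the checks
	 * below must confirm they are writable before the implementation
	 * dereferences them.
	 */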
Z_OOPS(Z_SYSCALL_MEMORY_WRITE(bytes_read, sizeof(*bytes_read))); Z_OOPS(Z_SYSCALL_MEMORY_WRITE((void *)data, bytes_to_read)); @@ -725,6 +723,7 @@ Z_SYSCALL_HANDLER(k_pipe_get, bytes_to_read, bytes_read, min_xfer, timeout); } +#include #endif int z_impl_k_pipe_put(struct k_pipe *pipe, void *data, size_t bytes_to_write, @@ -739,12 +738,9 @@ int z_impl_k_pipe_put(struct k_pipe *pipe, void *data, size_t bytes_to_write, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_pipe_put, pipe, data, bytes_to_write, bytes_written_p, - min_xfer_p, timeout) +int z_vrfy_k_pipe_put(struct k_pipe *pipe, void *data, size_t bytes_to_write, + size_t *bytes_written, size_t min_xfer, s32_t timeout) { - size_t *bytes_written = (size_t *)bytes_written_p; - size_t min_xfer = (size_t)min_xfer_p; - Z_OOPS(Z_SYSCALL_OBJ(pipe, K_OBJ_PIPE)); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(bytes_written, sizeof(*bytes_written))); Z_OOPS(Z_SYSCALL_MEMORY_READ((void *)data, bytes_to_write)); @@ -754,6 +750,7 @@ Z_SYSCALL_HANDLER(k_pipe_put, pipe, data, bytes_to_write, bytes_written_p, bytes_to_write, bytes_written, min_xfer, timeout); } +#include #endif #if (CONFIG_NUM_PIPE_ASYNC_MSGS > 0) diff --git a/kernel/poll.c b/kernel/poll.c index a3c088cc21a..b83e5a8741a 100644 --- a/kernel/poll.c +++ b/kernel/poll.c @@ -259,7 +259,7 @@ int z_impl_k_poll(struct k_poll_event *events, int num_events, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_poll, events, num_events, timeout) +static inline int z_vrfy_k_poll(struct k_poll_event *events, int num_events, s32_t timeout) { int ret; k_spinlock_key_t key; @@ -291,7 +291,7 @@ Z_SYSCALL_HANDLER(k_poll, events, num_events, timeout) k_spin_unlock(&lock, key); goto oops_free; } - (void)memcpy(events_copy, (void *)events, bounds); + (void)memcpy(events_copy, events, bounds); k_spin_unlock(&lock, key); /* Validate what's inside events_copy */ @@ -331,6 +331,7 @@ oops_free: k_free(events_copy); Z_OOPS(1); } +#include #endif /* must be called with interrupts locked */ @@ -389,12 +390,12 @@ void z_impl_k_poll_signal_init(struct k_poll_signal *signal) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_poll_signal_init, signal) +static inline void z_vrfy_k_poll_signal_init(struct k_poll_signal *signal) { Z_OOPS(Z_SYSCALL_OBJ_INIT(signal, K_OBJ_POLL_SIGNAL)); - z_impl_k_poll_signal_init((struct k_poll_signal *)signal); - return 0; + z_impl_k_poll_signal_init(signal); } +#include #endif void z_impl_k_poll_signal_check(struct k_poll_signal *signal, @@ -405,16 +406,15 @@ void z_impl_k_poll_signal_check(struct k_poll_signal *signal, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_poll_signal_check, signal, signaled, result) +void z_vrfy_k_poll_signal_check(struct k_poll_signal *signal, + unsigned int *signaled, int *result) { Z_OOPS(Z_SYSCALL_OBJ(signal, K_OBJ_POLL_SIGNAL)); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(signaled, sizeof(unsigned int))); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(result, sizeof(int))); - - z_impl_k_poll_signal_check((struct k_poll_signal *)signal, - (unsigned int *)signaled, (int *)result); - return 0; + z_impl_k_poll_signal_check(signal, signaled, result); } +#include #endif int z_impl_k_poll_signal_raise(struct k_poll_signal *signal, int result) @@ -438,12 +438,19 @@ int z_impl_k_poll_signal_raise(struct k_poll_signal *signal, int result) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_poll_signal_raise, signal, result) +static inline int z_vrfy_k_poll_signal_raise(struct k_poll_signal *signal, int result) { Z_OOPS(Z_SYSCALL_OBJ(signal, K_OBJ_POLL_SIGNAL)); - return z_impl_k_poll_signal_raise((struct 
k_poll_signal *)signal, result); + return z_impl_k_poll_signal_raise(signal, result); } -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_poll_signal_reset, K_OBJ_POLL_SIGNAL, - struct k_poll_signal *); +#include + +static inline void z_vrfy_k_poll_signal_reset(struct k_poll_signal *signal) +{ + Z_OOPS(Z_SYSCALL_OBJ(signal, K_OBJ_POLL_SIGNAL)); + z_impl_k_poll_signal_reset(signal); +} +#include + #endif diff --git a/kernel/queue.c b/kernel/queue.c index da035596950..81b7ad4fe2d 100644 --- a/kernel/queue.c +++ b/kernel/queue.c @@ -91,14 +91,12 @@ void z_impl_k_queue_init(struct k_queue *queue) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_queue_init, queue_ptr) +static inline void z_vrfy_k_queue_init(struct k_queue *queue) { - struct k_queue *queue = (struct k_queue *)queue_ptr; - Z_OOPS(Z_SYSCALL_OBJ_NEVER_INIT(queue, K_OBJ_QUEUE)); z_impl_k_queue_init(queue); - return 0; } +#include #endif #if !defined(CONFIG_POLL) @@ -135,8 +133,12 @@ void z_impl_k_queue_cancel_wait(struct k_queue *queue) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_queue_cancel_wait, K_OBJ_QUEUE, - struct k_queue *); +static inline void z_vrfy_k_queue_cancel_wait(struct k_queue *queue) +{ + Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); + z_impl_k_queue_cancel_wait(queue); +} +#include #endif static s32_t queue_insert(struct k_queue *queue, void *prev, void *data, @@ -203,13 +205,12 @@ s32_t z_impl_k_queue_alloc_append(struct k_queue *queue, void *data) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_queue_alloc_append, queue, data) +static inline s32_t z_vrfy_k_queue_alloc_append(struct k_queue *queue, void *data) { Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); - - return z_impl_k_queue_alloc_append((struct k_queue *)queue, - (void *)data); + return z_impl_k_queue_alloc_append(queue, data); } +#include #endif s32_t z_impl_k_queue_alloc_prepend(struct k_queue *queue, void *data) @@ -218,13 +219,12 @@ s32_t z_impl_k_queue_alloc_prepend(struct k_queue *queue, void *data) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_queue_alloc_prepend, queue, data) +static inline s32_t z_vrfy_k_queue_alloc_prepend(struct k_queue *queue, void *data) { Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); - - return z_impl_k_queue_alloc_prepend((struct k_queue *)queue, - (void *)data); + return z_impl_k_queue_alloc_prepend(queue, data); } +#include #endif void k_queue_append_list(struct k_queue *queue, void *head, void *tail) @@ -345,16 +345,32 @@ void *z_impl_k_queue_get(struct k_queue *queue, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_queue_get, queue, timeout_p) +static inline void *z_vrfy_k_queue_get(struct k_queue *queue, s32_t timeout) { - s32_t timeout = timeout_p; - Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); - - return (u32_t)z_impl_k_queue_get((struct k_queue *)queue, timeout); + return z_impl_k_queue_get(queue, timeout); } +#include + +static inline int z_vrfy_k_queue_is_empty(struct k_queue *queue) +{ + Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); + return z_impl_k_queue_is_empty(queue); +} +#include + +static inline void *z_vrfy_k_queue_peek_head(struct k_queue *queue) +{ + Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); + return z_impl_k_queue_peek_head(queue); +} +#include + +static inline void *z_vrfy_k_queue_peek_tail(struct k_queue *queue) +{ + Z_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE)); + return z_impl_k_queue_peek_tail(queue); +} +#include -Z_SYSCALL_HANDLER1_SIMPLE(k_queue_is_empty, K_OBJ_QUEUE, struct k_queue *); -Z_SYSCALL_HANDLER1_SIMPLE(k_queue_peek_head, K_OBJ_QUEUE, struct k_queue *); 
-Z_SYSCALL_HANDLER1_SIMPLE(k_queue_peek_tail, K_OBJ_QUEUE, struct k_queue *); #endif /* CONFIG_USERSPACE */ diff --git a/kernel/sched.c b/kernel/sched.c index 17557700b9e..541ebdd80a0 100644 --- a/kernel/sched.c +++ b/kernel/sched.c @@ -842,8 +842,12 @@ int z_impl_k_thread_priority_get(k_tid_t thread) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE(k_thread_priority_get, K_OBJ_THREAD, - struct k_thread *); +static inline int z_vrfy_k_thread_priority_get(k_tid_t thread) +{ + Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); + return z_impl_k_thread_priority_get(thread); +} +#include #endif void z_impl_k_thread_priority_set(k_tid_t tid, int prio) @@ -861,20 +865,18 @@ void z_impl_k_thread_priority_set(k_tid_t tid, int prio) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_thread_priority_set, thread_p, prio) +static inline void z_vrfy_k_thread_priority_set(k_tid_t thread, int prio) { - struct k_thread *thread = (struct k_thread *)thread_p; - Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); Z_OOPS(Z_SYSCALL_VERIFY_MSG(_is_valid_prio(prio, NULL), - "invalid thread priority %d", (int)prio)); + "invalid thread priority %d", prio)); Z_OOPS(Z_SYSCALL_VERIFY_MSG((s8_t)prio >= thread->base.prio, "thread priority may only be downgraded (%d < %d)", prio, thread->base.prio)); - z_impl_k_thread_priority_set((k_tid_t)thread, prio); - return 0; + z_impl_k_thread_priority_set(thread, prio); } +#include #endif #ifdef CONFIG_SCHED_DEADLINE @@ -927,7 +929,11 @@ void z_impl_k_yield(void) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER0_SIMPLE_VOID(k_yield); +static inline void z_vrfy_k_yield(void) +{ + z_impl_k_yield(); +} +#include #endif static s32_t z_tick_sleep(s32_t ticks) @@ -985,10 +991,11 @@ s32_t z_impl_k_sleep(int ms) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_sleep, ms) +static inline s32_t z_vrfy_k_sleep(int ms) { return z_impl_k_sleep(ms); } +#include #endif s32_t z_impl_k_usleep(int us) @@ -1001,10 +1008,11 @@ s32_t z_impl_k_usleep(int us) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_usleep, us) +static inline s32_t z_vrfy_k_usleep(int us) { return z_impl_k_usleep(us); } +#include #endif void z_impl_k_wakeup(k_tid_t thread) @@ -1079,7 +1087,12 @@ void z_sched_abort(struct k_thread *thread) #endif #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_wakeup, K_OBJ_THREAD, k_tid_t); +static inline void z_vrfy_k_wakeup(k_tid_t thread) +{ + Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); + z_impl_k_wakeup(thread); +} +#include #endif k_tid_t z_impl_k_current_get(void) @@ -1088,7 +1101,11 @@ k_tid_t z_impl_k_current_get(void) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER0_SIMPLE(k_current_get); +static inline k_tid_t z_vrfy_k_current_get(void) +{ + return z_impl_k_current_get(); +} +#include #endif int z_impl_k_is_preempt_thread(void) @@ -1097,7 +1114,11 @@ int z_impl_k_is_preempt_thread(void) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER0_SIMPLE(k_is_preempt_thread); +static inline int z_vrfy_k_is_preempt_thread(void) +{ + return z_impl_k_is_preempt_thread(); +} +#include #endif #ifdef CONFIG_SCHED_CPU_MASK diff --git a/kernel/sem.c b/kernel/sem.c index e9b7a5ed60e..0d8ce510f9a 100644 --- a/kernel/sem.c +++ b/kernel/sem.c @@ -80,13 +80,14 @@ void z_impl_k_sem_init(struct k_sem *sem, unsigned int initial_count, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_sem_init, sem, initial_count, limit) +void z_vrfy_k_sem_init(struct k_sem *sem, unsigned int initial_count, + unsigned int limit) { Z_OOPS(Z_SYSCALL_OBJ_INIT(sem, K_OBJ_SEM)); Z_OOPS(Z_SYSCALL_VERIFY(limit != 0 && initial_count <= limit)); - 
z_impl_k_sem_init((struct k_sem *)sem, initial_count, limit); - return 0; + z_impl_k_sem_init(sem, initial_count, limit); } +#include #endif static inline void handle_poll_events(struct k_sem *sem) @@ -127,7 +128,12 @@ void z_impl_k_sem_give(struct k_sem *sem) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_sem_give, K_OBJ_SEM, struct k_sem *); +static inline void z_vrfy_k_sem_give(struct k_sem *sem) +{ + Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM)); + z_impl_k_sem_give(sem); +} +#include #endif int z_impl_k_sem_take(struct k_sem *sem, s32_t timeout) @@ -157,12 +163,25 @@ int z_impl_k_sem_take(struct k_sem *sem, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_sem_take, sem, timeout) +static inline int z_vrfy_k_sem_take(struct k_sem *sem, s32_t timeout) { Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM)); return z_impl_k_sem_take((struct k_sem *)sem, timeout); } +#include + +static inline void z_vrfy_k_sem_reset(struct k_sem *sem) +{ + Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM)); + z_impl_k_sem_reset(sem); +} +#include + +static inline unsigned int z_vrfy_k_sem_count_get(struct k_sem *sem) +{ + Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM)); + return z_impl_k_sem_count_get(sem); +} +#include -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_sem_reset, K_OBJ_SEM, struct k_sem *); -Z_SYSCALL_HANDLER1_SIMPLE(k_sem_count_get, K_OBJ_SEM, struct k_sem *); #endif diff --git a/kernel/stack.c b/kernel/stack.c index 6f730356699..12dc63031c6 100644 --- a/kernel/stack.c +++ b/kernel/stack.c @@ -71,13 +71,13 @@ s32_t z_impl_k_stack_alloc_init(struct k_stack *stack, u32_t num_entries) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_stack_alloc_init, stack, num_entries) +static inline s32_t z_vrfy_k_stack_alloc_init(struct k_stack *stack, u32_t num_entries) { Z_OOPS(Z_SYSCALL_OBJ_NEVER_INIT(stack, K_OBJ_STACK)); Z_OOPS(Z_SYSCALL_VERIFY(num_entries > 0)); - - return z_impl_k_stack_alloc_init((struct k_stack *)stack, num_entries); + return z_impl_k_stack_alloc_init(stack, num_entries); } +#include #endif void k_stack_cleanup(struct k_stack *stack) @@ -118,17 +118,14 @@ void z_impl_k_stack_push(struct k_stack *stack, stack_data_t data) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_stack_push, stack_p, data) +static inline void z_vrfy_k_stack_push(struct k_stack *stack, stack_data_t data) { - struct k_stack *stack = (struct k_stack *)stack_p; - Z_OOPS(Z_SYSCALL_OBJ(stack, K_OBJ_STACK)); Z_OOPS(Z_SYSCALL_VERIFY_MSG(stack->next != stack->top, "stack is full")); - z_impl_k_stack_push(stack, data); - return 0; } +#include #endif int z_impl_k_stack_pop(struct k_stack *stack, stack_data_t *data, s32_t timeout) @@ -160,12 +157,11 @@ int z_impl_k_stack_pop(struct k_stack *stack, stack_data_t *data, s32_t timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_stack_pop, stack, data, timeout) +static inline int z_vrfy_k_stack_pop(struct k_stack *stack, stack_data_t *data, s32_t timeout) { Z_OOPS(Z_SYSCALL_OBJ(stack, K_OBJ_STACK)); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(data, sizeof(stack_data_t))); - - return z_impl_k_stack_pop((struct k_stack *)stack, (stack_data_t *)data, - timeout); + return z_impl_k_stack_pop(stack, data, timeout); } +#include #endif diff --git a/kernel/thread.c b/kernel/thread.c index 19478ea844c..b8a63d0e39f 100644 --- a/kernel/thread.c +++ b/kernel/thread.c @@ -119,11 +119,11 @@ void z_impl_k_busy_wait(u32_t usec_to_wait) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_busy_wait, usec_to_wait) +static inline void z_vrfy_k_busy_wait(u32_t usec_to_wait) { z_impl_k_busy_wait(usec_to_wait); - return 0; } +#include #endif /* 
CONFIG_USERSPACE */ #endif /* CONFIG_SYS_CLOCK_EXISTS */ @@ -134,11 +134,11 @@ void z_impl_k_thread_custom_data_set(void *value) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_thread_custom_data_set, data) +static inline void z_vrfy_k_thread_custom_data_set(void *data) { - z_impl_k_thread_custom_data_set((void *)data); - return 0; + z_impl_k_thread_custom_data_set(data); } +#include #endif void *z_impl_k_thread_custom_data_get(void) @@ -147,7 +147,12 @@ void *z_impl_k_thread_custom_data_get(void) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER0_SIMPLE(k_thread_custom_data_get); +static inline void *z_vrfy_k_thread_custom_data_get(void) +{ + return z_impl_k_thread_custom_data_get(); +} +#include + #endif /* CONFIG_USERSPACE */ #endif /* CONFIG_THREAD_CUSTOM_DATA */ @@ -196,13 +201,11 @@ int z_impl_k_thread_name_set(struct k_thread *thread, const char *value) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_thread_name_set, thread, str_param) +static inline int z_vrfy_k_thread_name_set(struct k_thread *t, const char *str) { #ifdef CONFIG_THREAD_NAME - struct k_thread *t = (struct k_thread *)thread; size_t len; int err; - const char *str = (const char *)str_param; if (t != NULL) { if (Z_SYSCALL_OBJ(t, K_OBJ_THREAD) != 0) { @@ -223,6 +226,7 @@ Z_SYSCALL_HANDLER(k_thread_name_set, thread, str_param) return -ENOSYS; #endif /* CONFIG_THREAD_NAME */ } +#include #endif /* CONFIG_USERSPACE */ const char *k_thread_name_get(struct k_thread *thread) @@ -280,11 +284,10 @@ const char *k_thread_state_str(k_tid_t thread_id) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_thread_name_copy, thread_id, buf, size) +static inline int z_vrfy_k_thread_name_copy(k_tid_t t, char *buf, size_t size) { #ifdef CONFIG_THREAD_NAME size_t len; - struct k_thread *t = (struct k_thread *)thread_id; struct _k_object *ko = z_object_find(t); /* Special case: we allow reading the names of initialized threads @@ -304,12 +307,13 @@ Z_SYSCALL_HANDLER(k_thread_name_copy, thread_id, buf, size) return z_user_to_copy((void *)buf, t->name, len + 1); #else - ARG_UNUSED(thread_id); + ARG_UNUSED(t); ARG_UNUSED(buf); ARG_UNUSED(size); return -ENOSYS; #endif /* CONFIG_THREAD_NAME */ } +#include #endif /* CONFIG_USERSPACE */ @@ -362,7 +366,12 @@ void z_impl_k_thread_start(struct k_thread *thread) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_thread_start, K_OBJ_THREAD, struct k_thread *); +static inline void z_vrfy_k_thread_start(struct k_thread *thread) +{ + Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); + return z_impl_k_thread_start(thread); +} +#include #endif #endif @@ -542,18 +551,14 @@ k_tid_t z_impl_k_thread_create(struct k_thread *new_thread, #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_thread_create, - new_thread_p, stack_p, stack_size, entry, p1, more_args) +k_tid_t z_vrfy_k_thread_create(struct k_thread *new_thread, + k_thread_stack_t *stack, + size_t stack_size, k_thread_entry_t entry, + void *p1, void *p2, void *p3, + int prio, u32_t options, s32_t delay) { - int prio; - u32_t options, delay; u32_t total_size; - struct _k_object *stack_object; - struct k_thread *new_thread = (struct k_thread *)new_thread_p; - volatile struct _syscall_10_args *margs = - (volatile struct _syscall_10_args *)more_args; - k_thread_stack_t *stack = (k_thread_stack_t *)stack_p; /* The thread and stack objects *must* be in an uninitialized state */ Z_OOPS(Z_SYSCALL_OBJ_NEVER_INIT(new_thread, K_OBJ_THREAD)); @@ -568,7 +573,8 @@ Z_SYSCALL_HANDLER(k_thread_create, */ Z_OOPS(Z_SYSCALL_VERIFY_MSG(!u32_add_overflow(K_THREAD_STACK_RESERVED, stack_size, 
&total_size), - "stack size overflow (%u+%u)", stack_size, + "stack size overflow (%u+%u)", + (unsigned int) stack_size, K_THREAD_STACK_RESERVED)); /* Testing less-than-or-equal since additional room may have been @@ -578,17 +584,6 @@ Z_SYSCALL_HANDLER(k_thread_create, "stack size %u is too big, max is %u", total_size, stack_object->data)); - /* Verify the struct containing args 6-10 */ - Z_OOPS(Z_SYSCALL_MEMORY_READ(margs, sizeof(*margs))); - - /* Stash struct arguments in local variables to prevent switcheroo - * attacks - */ - prio = margs->arg8; - options = margs->arg9; - delay = margs->arg10; - compiler_barrier(); - /* User threads may only create other user threads and they can't * be marked as essential */ @@ -602,17 +597,16 @@ Z_SYSCALL_HANDLER(k_thread_create, Z_OOPS(Z_SYSCALL_VERIFY(z_is_prio_lower_or_equal(prio, _current->base.prio))); - z_setup_new_thread((struct k_thread *)new_thread, stack, stack_size, - (k_thread_entry_t)entry, (void *)p1, - (void *)margs->arg6, (void *)margs->arg7, prio, - options, NULL); + z_setup_new_thread(new_thread, stack, stack_size, + entry, p1, p2, p3, prio, options, NULL); if (delay != K_FOREVER) { schedule_new_thread(new_thread, delay); } - return new_thread_p; + return new_thread; } +#include #endif /* CONFIG_USERSPACE */ #endif /* CONFIG_MULTITHREADING */ @@ -641,7 +635,12 @@ void z_impl_k_thread_suspend(struct k_thread *thread) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_thread_suspend, K_OBJ_THREAD, k_tid_t); +static inline void z_vrfy_k_thread_suspend(struct k_thread *thread) +{ + Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); + z_impl_k_thread_suspend(thread); +} +#include #endif void z_thread_single_resume(struct k_thread *thread) @@ -661,7 +660,12 @@ void z_impl_k_thread_resume(struct k_thread *thread) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_thread_resume, K_OBJ_THREAD, k_tid_t); +static inline void z_vrfy_k_thread_resume(struct k_thread *thread) +{ + Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); + z_impl_k_thread_resume(thread); +} +#include #endif void z_thread_single_abort(struct k_thread *thread) @@ -834,12 +838,21 @@ int z_impl_k_float_disable(struct k_thread *thread) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_float_disable, thread_p) +static inline int z_vrfy_k_float_disable(struct k_thread *thread) { - struct k_thread *thread = (struct k_thread *)thread_p; - Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); - - return z_impl_k_float_disable((struct k_thread *)thread_p); + return z_impl_k_float_disable(thread); } +#include + +static inline void z_vrfy_k_thread_abort(k_tid_t thread) +{ + Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); + Z_OOPS(Z_SYSCALL_VERIFY_MSG(!(thread->base.user_options & K_ESSENTIAL), + "aborting essential thread %p", thread)); + + z_impl_k_thread_abort((struct k_thread *)thread); +} +#include + #endif /* CONFIG_USERSPACE */ diff --git a/kernel/thread_abort.c b/kernel/thread_abort.c index ef474b74315..7de1444d2f1 100644 --- a/kernel/thread_abort.c +++ b/kernel/thread_abort.c @@ -58,16 +58,3 @@ void z_impl_k_thread_abort(k_tid_t thread) } } #endif - -#ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_thread_abort, thread_p) -{ - struct k_thread *thread = (struct k_thread *)thread_p; - Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD)); - Z_OOPS(Z_SYSCALL_VERIFY_MSG(!(thread->base.user_options & K_ESSENTIAL), - "aborting essential thread %p", thread)); - - z_impl_k_thread_abort((struct k_thread *)thread); - return 0; -} -#endif diff --git a/kernel/timeout.c b/kernel/timeout.c index 
feed788f911..ede4c4e8f91 100644 --- a/kernel/timeout.c +++ b/kernel/timeout.c @@ -31,10 +31,11 @@ static int announce_remaining; int z_clock_hw_cycles_per_sec = CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC; #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(z_clock_hw_cycles_per_sec_runtime_get) +static inline int z_vrfy_z_clock_hw_cycles_per_sec_runtime_get(void) { return z_impl_z_clock_hw_cycles_per_sec_runtime_get(); } +#include #endif /* CONFIG_USERSPACE */ #endif /* CONFIG_TIMER_READS_ITS_FREQUENCY_AT_RUNTIME */ @@ -235,12 +236,9 @@ s64_t z_impl_k_uptime_get(void) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_uptime_get, ret_p) +static inline s64_t z_vrfy_k_uptime_get(void) { - u64_t *ret = (u64_t *)ret_p; - - Z_OOPS(Z_SYSCALL_MEMORY_WRITE(ret, sizeof(*ret))); - *ret = z_impl_k_uptime_get(); - return 0; + return z_impl_k_uptime_get(); } +#include #endif diff --git a/kernel/timer.c b/kernel/timer.c index cbb79684729..f81709db9df 100644 --- a/kernel/timer.c +++ b/kernel/timer.c @@ -122,19 +122,14 @@ void z_impl_k_timer_start(struct k_timer *timer, s32_t duration, s32_t period) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_timer_start, timer, duration_p, period_p) +static inline void z_vrfy_k_timer_start(struct k_timer *timer, s32_t duration, s32_t period) { - s32_t duration, period; - - duration = (s32_t)duration_p; - period = (s32_t)period_p; - Z_OOPS(Z_SYSCALL_VERIFY(duration >= 0 && period >= 0 && (duration != 0 || period != 0))); Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); - z_impl_k_timer_start((struct k_timer *)timer, duration, period); - return 0; + z_impl_k_timer_start(timer, duration, period); } +#include #endif void z_impl_k_timer_stop(struct k_timer *timer) @@ -158,7 +153,12 @@ void z_impl_k_timer_stop(struct k_timer *timer) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE_VOID(k_timer_stop, K_OBJ_TIMER, struct k_timer *); +static inline void z_vrfy_k_timer_stop(struct k_timer *timer) +{ + Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); + z_impl_k_timer_stop(timer); +} +#include #endif u32_t z_impl_k_timer_status_get(struct k_timer *timer) @@ -173,7 +173,12 @@ u32_t z_impl_k_timer_status_get(struct k_timer *timer) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE(k_timer_status_get, K_OBJ_TIMER, struct k_timer *); +static inline u32_t z_vrfy_k_timer_status_get(struct k_timer *timer) +{ + Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); + return z_impl_k_timer_status_get(timer); +} +#include #endif u32_t z_impl_k_timer_status_sync(struct k_timer *timer) @@ -205,17 +210,32 @@ u32_t z_impl_k_timer_status_sync(struct k_timer *timer) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE(k_timer_status_sync, K_OBJ_TIMER, struct k_timer *); -#endif - -#ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER1_SIMPLE(k_timer_remaining_get, K_OBJ_TIMER, struct k_timer *); -Z_SYSCALL_HANDLER1_SIMPLE(k_timer_user_data_get, K_OBJ_TIMER, struct k_timer *); - -Z_SYSCALL_HANDLER(k_timer_user_data_set, timer, user_data) +static inline u32_t z_vrfy_k_timer_status_sync(struct k_timer *timer) { Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); - z_impl_k_timer_user_data_set((struct k_timer *)timer, (void *)user_data); - return 0; + return z_impl_k_timer_status_sync(timer); } +#include + +static inline u32_t z_vrfy_k_timer_remaining_get(struct k_timer *timer) +{ + Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); + return z_impl_k_timer_remaining_get(timer); +} +#include + +static inline void *z_vrfy_k_timer_user_data_get(struct k_timer *timer) +{ + Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); + return z_impl_k_timer_user_data_get(timer); +} 
+#include + +static inline void z_vrfy_k_timer_user_data_set(struct k_timer *timer, void *user_data) +{ + Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER)); + z_impl_k_timer_user_data_set(timer, user_data); +} +#include + #endif diff --git a/kernel/userspace.c b/kernel/userspace.c index aabfb301722..d7d152adb40 100644 --- a/kernel/userspace.c +++ b/kernel/userspace.c @@ -761,7 +761,7 @@ static u32_t handler_bad_syscall(u32_t bad_id, u32_t arg2, u32_t arg3, u32_t arg4, u32_t arg5, u32_t arg6, void *ssf) { printk("Bad system call id %u invoked\n", bad_id); - z_arch_syscall_oops(ssf); + z_arch_syscall_oops(_current_cpu->syscall_frame); CODE_UNREACHABLE; /* LCOV_EXCL_LINE */ } @@ -769,7 +769,7 @@ static u32_t handler_no_syscall(u32_t arg1, u32_t arg2, u32_t arg3, u32_t arg4, u32_t arg5, u32_t arg6, void *ssf) { printk("Unimplemented system call\n"); - z_arch_syscall_oops(ssf); + z_arch_syscall_oops(_current_cpu->syscall_frame); CODE_UNREACHABLE; /* LCOV_EXCL_LINE */ } diff --git a/kernel/userspace_handler.c b/kernel/userspace_handler.c index 7919a074822..bef60110518 100644 --- a/kernel/userspace_handler.c +++ b/kernel/userspace_handler.c @@ -36,20 +36,19 @@ static struct _k_object *validate_any_object(void *obj) * To avoid double z_object_find() lookups, we don't call the implementation * function, but call a level deeper. */ -Z_SYSCALL_HANDLER(k_object_access_grant, object, thread) +static inline void z_vrfy_k_object_access_grant(void *object, struct k_thread *thread) { struct _k_object *ko; Z_OOPS(Z_SYSCALL_OBJ_INIT(thread, K_OBJ_THREAD)); - ko = validate_any_object((void *)object); + ko = validate_any_object(object); Z_OOPS(Z_SYSCALL_VERIFY_MSG(ko != NULL, "object %p access denied", - (void *)object)); - z_thread_perms_set(ko, (struct k_thread *)thread); - - return 0; + object)); + z_thread_perms_set(ko, thread); } +#include -Z_SYSCALL_HANDLER(k_object_release, object) +static inline void z_vrfy_k_object_release(void *object) { struct _k_object *ko; @@ -57,15 +56,15 @@ Z_SYSCALL_HANDLER(k_object_release, object) Z_OOPS(Z_SYSCALL_VERIFY_MSG(ko != NULL, "object %p access denied", (void *)object)); z_thread_perms_clear(ko, _current); - - return 0; } +#include -Z_SYSCALL_HANDLER(k_object_alloc, otype) +static inline void *z_vrfy_k_object_alloc(enum k_objects otype) { Z_OOPS(Z_SYSCALL_VERIFY_MSG(otype > K_OBJ_ANY && otype < K_OBJ_LAST && otype != K_OBJ__THREAD_STACK_ELEMENT, "bad object type %d requested", otype)); - return (u32_t)z_impl_k_object_alloc(otype); + return z_impl_k_object_alloc(otype); } +#include diff --git a/lib/libc/minimal/source/stdout/stdout_console.c b/lib/libc/minimal/source/stdout/stdout_console.c index 4ca2d659f71..a0dbc4e613b 100644 --- a/lib/libc/minimal/source/stdout/stdout_console.c +++ b/lib/libc/minimal/source/stdout/stdout_console.c @@ -31,10 +31,11 @@ int z_impl_zephyr_fputc(int c, FILE *stream) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zephyr_fputc, c, stream) +static inline int z_vrfy_zephyr_fputc(int c, FILE *stream) { - return z_impl_zephyr_fputc(c, (FILE *)stream); + return z_impl_zephyr_fputc(c, stream); } +#include #endif int fputc(int c, FILE *stream) diff --git a/lib/os/mutex.c b/lib/os/mutex.c index 8d887a12df1..72ebd755bed 100644 --- a/lib/os/mutex.c +++ b/lib/os/mutex.c @@ -41,15 +41,15 @@ int z_impl_z_sys_mutex_kernel_lock(struct sys_mutex *mutex, s32_t timeout) return k_mutex_lock(kernel_mutex, timeout); } -Z_SYSCALL_HANDLER(z_sys_mutex_kernel_lock, mutex, timeout) +static inline int z_vrfy_z_sys_mutex_kernel_lock(struct sys_mutex *mutex, s32_t timeout) { 
- if (check_sys_mutex_addr(mutex)) { + if (check_sys_mutex_addr((u32_t) mutex)) { return -EACCES; } - return z_impl_z_sys_mutex_kernel_lock((struct sys_mutex *)mutex, - timeout); + return z_impl_z_sys_mutex_kernel_lock(mutex, timeout); } +#include int z_impl_z_sys_mutex_kernel_unlock(struct sys_mutex *mutex) { @@ -67,12 +67,12 @@ int z_impl_z_sys_mutex_kernel_unlock(struct sys_mutex *mutex) return 0; } -Z_SYSCALL_HANDLER(z_sys_mutex_kernel_unlock, mutex) +static inline int z_vrfy_z_sys_mutex_kernel_unlock(struct sys_mutex *mutex) { - if (check_sys_mutex_addr(mutex)) { + if (check_sys_mutex_addr((u32_t) mutex)) { return -EACCES; } - return z_impl_z_sys_mutex_kernel_unlock((struct sys_mutex *)mutex); + return z_impl_z_sys_mutex_kernel_unlock(mutex); } - +#include diff --git a/lib/os/printk.c b/lib/os/printk.c index 1e4e270d3fc..4c95a3b6b4c 100644 --- a/lib/os/printk.c +++ b/lib/os/printk.c @@ -356,13 +356,12 @@ void z_impl_k_str_out(char *c, size_t n) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(k_str_out, c, n) +static inline void z_vrfy_k_str_out(char *c, size_t n) { Z_OOPS(Z_SYSCALL_MEMORY_READ(c, n)); z_impl_k_str_out((char *)c, n); - - return 0; } +#include #endif /** diff --git a/scripts/gen_syscall_header.py b/scripts/gen_syscall_header.py index d0f8114719c..b08c4d53808 100755 --- a/scripts/gen_syscall_header.py +++ b/scripts/gen_syscall_header.py @@ -7,173 +7,18 @@ """ Generation script for syscall_macros.h -The generation of macros for invoking system calls of various number -of arguments, in different execution types (supervisor only, user only, -mixed supervisor/user code) is tedious and repetitive. Rather than writing -by hand, this script generates it. +Except for a single transitive include, this header is empty. The +generated code that used to live here is now emitted by +gen_syscalls.py directly. This script has no inputs, and emits the generated header to stdout. """ import sys -from enum import Enum - - -class Retval(Enum): - VOID = 0 - U32 = 1 - U64 = 2 - - -def gen_macro(ret, argc): - if ret == Retval.VOID: - suffix = "_VOID" - elif ret == Retval.U64: - suffix = "_RET64" - else: - suffix = "" - - sys.stdout.write("K_SYSCALL_DECLARE%d%s(id, name" % (argc, suffix)) - if ret != Retval.VOID: - sys.stdout.write(", ret") - for i in range(argc): - sys.stdout.write(", t%d, p%d" % (i, i)) - sys.stdout.write(")") - - -def gen_fn(ret, argc, name, extern=False): - sys.stdout.write("\t%s %s %s(" % - (("extern" if extern else "static inline"), - ("ret" if ret != Retval.VOID else "void"), name)) - if argc == 0: - sys.stdout.write("void") - else: - for i in range(argc): - sys.stdout.write("t%d p%d" % (i, i)) - if i != (argc - 1): - sys.stdout.write(", ") - sys.stdout.write(")") - - -def tabs(count): - sys.stdout.write("\t" * count) - - -def gen_make_syscall(ret, argc, tabcount): - tabs(tabcount) - - # The core kernel is built with the --no-whole-archive linker option. - # For all the individual .o files which make up the kernel, if there - # are no external references to symbols within these object files, - # everything in the object file is dropped. - # - # This has a subtle interaction with system call handlers. If an object - # file has system call handler inside it, and nothing else in the - # object file is referenced, then the linker will prefer the weak - # version of the handler in the generated syscall_dispatch.c. The - # user will get an "unimplemented system call" error if the associated - # system call for that handler is made. 
- # - # Fix this by making a fake reference to the handler function at the - # system call site. The address gets stored inside a special section - # "hndlr_ref". This is enough to prevent the handlers from being - # dropped, and the hndlr_ref section is itself dropped from the binary - # from gc-sections; these references will not consume space. - - sys.stdout.write( - "static Z_GENERIC_SECTION(hndlr_ref) __used void *href = (void *)&z_hdlr_##name; \\\n") - tabs(tabcount) - if ret != Retval.VOID: - sys.stdout.write("return (ret)") - else: - sys.stdout.write("return (void)") - if (argc <= 6 and ret != Retval.U64): - sys.stdout.write("z_arch_syscall%s_invoke%d(" % - (("_ret64" if ret == Retval.U64 else ""), argc)) - else: - sys.stdout.write("z_syscall%s_invoke%d(" % - (("_ret64" if ret == Retval.U64 else ""), argc)) - for i in range(argc): - sys.stdout.write("(u32_t)p%d, " % (i)) - sys.stdout.write("id); \\\n") - - -def gen_call_impl(ret, argc): - if ret != Retval.VOID: - sys.stdout.write("return ") - sys.stdout.write("z_impl_##name(") - for i in range(argc): - sys.stdout.write("p%d" % (i)) - if i != (argc - 1): - sys.stdout.write(", ") - sys.stdout.write("); \\\n") - - -def newline(): - sys.stdout.write(" \\\n") - - -def gen_defines_inner(ret, argc, kernel_only=False, user_only=False): - sys.stdout.write("#define ") - gen_macro(ret, argc) - newline() - - if not user_only: - gen_fn(ret, argc, "z_impl_##name", extern=True) - sys.stdout.write(";") - newline() - - gen_fn(ret, argc, "name") - newline() - sys.stdout.write("\t{") - newline() - - if kernel_only: - sys.stdout.write("\t\t") - gen_call_impl(ret, argc) - elif user_only: - gen_make_syscall(ret, argc, 2) - else: - sys.stdout.write("\t\tif (_is_user_context()) {") - newline() - - gen_make_syscall(ret, argc, 3) - - sys.stdout.write("\t\t} else {") - newline() - - # Prevent memory access issues if the implementation function gets - # inlined - sys.stdout.write("\t\t\tcompiler_barrier();") - newline() - - sys.stdout.write("\t\t\t") - gen_call_impl(ret, argc) - sys.stdout.write("\t\t}") - newline() - - sys.stdout.write("\t}\n\n") - - -def gen_defines(argc, kernel_only=False, user_only=False): - gen_defines_inner(Retval.VOID, argc, kernel_only, user_only) - gen_defines_inner(Retval.U32, argc, kernel_only, user_only) - gen_defines_inner(Retval.U64, argc, kernel_only, user_only) - sys.stdout.write( "/* Auto-generated by gen_syscall_header.py, do not edit! */\n\n") sys.stdout.write("#ifndef GEN_SYSCALL_H\n#define GEN_SYSCALL_H\n\n") sys.stdout.write("#include \n") -for i in range(11): - sys.stdout.write( - "#if !defined(CONFIG_USERSPACE) || defined(__ZEPHYR_SUPERVISOR__)\n") - gen_defines(i, kernel_only=True) - sys.stdout.write("#elif defined(__ZEPHYR_USER__)\n") - gen_defines(i, user_only=True) - sys.stdout.write("#else /* mixed kernel/user macros */\n") - gen_defines(i) - sys.stdout.write("#endif /* mixed kernel/user macros */\n\n") - sys.stdout.write("#endif /* GEN_SYSCALL_H */\n") diff --git a/scripts/gen_syscalls.py b/scripts/gen_syscalls.py index ada81acc0a1..7d8bff5b9ea 100755 --- a/scripts/gen_syscalls.py +++ b/scripts/gen_syscalls.py @@ -29,6 +29,20 @@ import argparse import os import json +types64 = ["s64_t", "u64_t"] + +# The kernel linkage is complicated. These functions from +# userspace_handlers.c are present in the kernel .a library after +# userspace.c, which contains the weak fallbacks defined here. So the +# linker finds the weak one first and stops searching, and thus won't +# see the real implementation which should override. 
Yet changing the +# order runs afoul of a comment in CMakeLists.txt that the order is +# critical. These are core syscalls that won't ever be unconfigured, +# just disable the fallback mechanism as a simple workaround. +noweak = set(["z_mrsh_k_object_release", + "z_mrsh_k_object_access_grant", + "z_mrsh_k_object_alloc"]) + table_template = """/* auto-generated by gen_syscalls.py, don't edit */ /* Weak handler functions that get replaced by the real ones unless a system @@ -52,16 +66,6 @@ list_template = """ #include -#ifdef __cplusplus -extern "C" { -#endif - -%s - -#ifdef __cplusplus -} -#endif - #endif /* _ASMLANGUAGE */ #endif /* ZEPHYR_SYSCALL_LIST_H */ @@ -69,12 +73,16 @@ extern "C" { syscall_template = """ /* auto-generated by gen_syscalls.py, don't edit */ +%s #ifndef _ASMLANGUAGE #include #include +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wstrict-aliasing" + #ifdef __cplusplus extern "C" { #endif @@ -85,11 +93,14 @@ extern "C" { } #endif +#pragma GCC diagnostic pop + #endif +#endif /* include guard */ """ handler_template = """ -extern u32_t %s(u32_t arg1, u32_t arg2, u32_t arg3, +extern u32_t z_hdlr_%s(u32_t arg1, u32_t arg2, u32_t arg3, u32_t arg4, u32_t arg5, u32_t arg6, void *ssf); """ @@ -124,6 +135,164 @@ def typename_split(item): m = mo.groups() return (m[0].strip(), m[1]) +def need_split(argtype): + return argtype in types64 + +# Note: "lo" and "hi" are named in little endian conventions, +# but it doesn't matter as long as they are consistently +# generated. +def union_decl(type): + return "union { struct { u32_t lo, hi; } split; %s val; }" % type + +def wrapper_defs(func_name, func_type, args): + ret64 = func_type in types64 + mrsh_args = [] # List of rvalue expressions for the marshalled invocation + split_args = [] + nsplit = 0 + for i, argrec in enumerate(args): + (argtype, argname) = argrec + if need_split(argtype): + split_args.append((argtype, argname)) + mrsh_args.append("parm%d.split.lo" % nsplit) + mrsh_args.append("parm%d.split.hi" % nsplit) + nsplit += 1 + else: + mrsh_args.append("*(u32_t *)&" + argname) + + if ret64: + mrsh_args.append("(u32_t)&ret64") + + decl_arglist = ", ".join([" ".join(argrec) for argrec in args]) + + wrap = "extern %s z_impl_%s(%s);\n" % (func_type, func_name, decl_arglist) + wrap += "static inline %s %s(%s)\n" % (func_type, func_name, decl_arglist) + wrap += "{\n" + wrap += "#ifdef CONFIG_USERSPACE\n" + wrap += ("\t" + "u64_t ret64;\n") if ret64 else "" + wrap += "\t" + "if (z_syscall_trap()) {\n" + + for parmnum, rec in enumerate(split_args): + (argtype, argname) = rec + wrap += "\t\t%s parm%d;\n" % (union_decl(argtype), parmnum) + wrap += "\t\t" + "parm%d.val = %s;\n" % (parmnum, argname) + + if len(mrsh_args) > 6: + wrap += "\t\t" + "u32_t more[] = {\n" + wrap += "\t\t\t" + (",\n\t\t\t".join(mrsh_args[5:])) + "\n" + wrap += "\t\t" + "};\n" + mrsh_args[5:] = ["(u32_t) &more"] + + syscall_id = "K_SYSCALL_" + func_name.upper() + invoke = ("z_arch_syscall_invoke%d(%s)" + % (len(mrsh_args), + ", ".join(mrsh_args + [syscall_id]))) + + if ret64: + wrap += "\t\t" + "(void)%s;\n" % invoke + wrap += "\t\t" + "return (%s)ret64;\n" % func_type + elif func_type == "void": + wrap += "\t\t" + "%s;\n" % invoke + wrap += "\t\t" + "return;\n"; + else: + wrap += "\t\t" + "return (%s) %s;\n" % (func_type, invoke) + + wrap += "\t" + "}\n" + wrap += "#endif\n" + + # Otherwise fall through to direct invocation of the impl func. 
+    # Note the compiler barrier: that is required to prevent code from
+    # the impl call from being hoisted above the check for user
+    # context.
+    impl_arglist = ", ".join([argrec[1] for argrec in args])
+    impl_call = "z_impl_%s(%s)" % (func_name, impl_arglist)
+    wrap += "\t" + "compiler_barrier();\n"
+    wrap += "\t" + "%s%s;\n" % ("return " if func_type != "void" else "",
+                                impl_call)
+
+    wrap += "}\n"
+
+    return wrap
+
+# Returns an expression for the specified (zero-indexed!) marshalled
+# parameter to a syscall, with handling for a final "more" parameter.
+def mrsh_rval(mrsh_num, total):
+    if mrsh_num < 5 or total <= 6:
+        return "arg%d" % mrsh_num
+    else:
+        return "(((u32_t *)more)[%d])" % (mrsh_num - 5)
+
+def marshall_defs(func_name, func_type, args):
+    mrsh_name = "z_mrsh_" + func_name
+
+    nmrsh = 0        # number of marshalled u32_t parameter
+    vrfy_parms = []  # list of (arg_num, mrsh_or_parm_num, bool_is_split)
+    split_parms = [] # list of a (arg_num, mrsh_num) for each split
+    for i, argrec in enumerate(args):
+        (argtype, argname) = argrec
+        if need_split(argtype):
+            vrfy_parms.append((i, len(split_parms), True))
+            split_parms.append((i, nmrsh))
+            nmrsh += 2
+        else:
+            vrfy_parms.append((i, nmrsh, False))
+            nmrsh += 1
+
+    # Final argument for a 64 bit return value?
+    if func_type in types64:
+        nmrsh += 1
+
+    decl_arglist = ", ".join([" ".join(argrec) for argrec in args])
+    mrsh = "extern %s z_vrfy_%s(%s);\n" % (func_type, func_name, decl_arglist)
+    mrsh += "u32_t %s(u32_t arg0, u32_t arg1, u32_t arg2,\n" % mrsh_name
+    if nmrsh <= 6:
+        mrsh += "\t\t" + "u32_t arg3, u32_t arg4, u32_t arg5, void *ssf)\n";
+    else:
+        mrsh += "\t\t" + "u32_t arg3, u32_t arg4, void *more, void *ssf)\n";
+    mrsh += "{\n"
+    mrsh += "\t" + "_current_cpu->syscall_frame = ssf;\n";
+
+    for unused_arg in range(nmrsh, 6):
+        mrsh += "\t(void) arg%d;\t/* unused */\n" % unused_arg
+
+    if nmrsh > 6:
+        mrsh += ("\tZ_OOPS(Z_SYSCALL_MEMORY_READ(more, "
+                 + str(nmrsh - 6) + " * sizeof(u32_t)));\n")
+
+    for i, split_rec in enumerate(split_parms):
+        arg_num, mrsh_num = split_rec
+        arg_type = args[arg_num][0];
+        mrsh += "\t%s parm%d;\n" % (union_decl(arg_type), i);
+        mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(mrsh_num,
+                                                                 nmrsh))
+        mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(mrsh_num + 1,
+                                                                 nmrsh))
+    # Finally, invoke the verify function
+    out_args = []
+    for i, argn, is_split in vrfy_parms:
+        if is_split:
+            out_args.append("parm%d.val" % argn)
+        else:
+            out_args.append("*(%s*)&%s" % (args[i][0], mrsh_rval(argn, nmrsh)))
+
+    vrfy_call = "z_vrfy_%s(%s)\n" % (func_name, ", ".join(out_args))
+
+    if func_type == "void":
+        mrsh += "\t" + "%s;\n" % vrfy_call
+        mrsh += "\t" + "return 0;\n"
+    else:
+        mrsh += "\t" + "%s ret = %s;\n" % (func_type, vrfy_call)
+        if func_type in types64:
+            ptr = "((u64_t *)%s)" % mrsh_rval(nmrsh - 1, nmrsh)
+            mrsh += "\t" + "Z_OOPS(Z_SYSCALL_MEMORY_WRITE(%s, 8));\n" % ptr
+            mrsh += "\t" + "*%s = ret;\n" % ptr
+            mrsh += "\t" + "return 0;\n"
+        else:
+            mrsh += "\t" + "return (u32_t) ret;\n"
+
+    mrsh += "}\n"
+
+    return mrsh, mrsh_name
 
 def analyze_fn(match_group):
     func, args = match_group
@@ -141,39 +310,14 @@ def analyze_fn(match_group):
     sys_id = "K_SYSCALL_" + func_name.upper()
 
-    if func_type == "void":
-        suffix = "_VOID"
-        is_void = True
-    else:
-        is_void = False
-        if func_type in ["s64_t", "u64_t"]:
-            suffix = "_RET64"
-        else:
-            suffix = ""
-
-    is_void = (func_type == "void")
-
-    # Get the proper system call macro invocation, which depends on the
-    # number of arguments, the return type, and whether the implementation
-    # is an inline function
-    macro = "K_SYSCALL_DECLARE%d%s" % (len(args), suffix)
-
-    # Flatten the argument lists and generate a comma separated list
-    # of t0, p0, t1, p1, ... tN, pN as expected by the macros
-    flat_args = [i for sublist in args for i in sublist]
-    if not is_void:
-        flat_args = [func_type] + flat_args
-    flat_args = [sys_id, func_name] + flat_args
-    argslist = ", ".join(flat_args)
-
-    invocation = "%s(%s)" % (macro, argslist)
-
-    handler = "z_hdlr_" + func_name
+    marshaller = None
+    marshaller, handler = marshall_defs(func_name, func_type, args)
+    invocation = wrapper_defs(func_name, func_type, args)
 
     # Entry in _k_syscall_table
     table_entry = "[%s] = %s" % (sys_id, handler)
 
-    return (handler, invocation, sys_id, table_entry)
+    return (handler, invocation, marshaller, sys_id, table_entry)
 
 def parse_args():
     global args
@@ -189,22 +333,30 @@ def parse_args():
                         help="output C system call list header")
     parser.add_argument("-o", "--base-output", required=True,
                         help="Base output directory for syscall macro headers")
+    parser.add_argument("-s", "--split-type", action="append",
+                        help="A long type that must be split/marshalled")
     args = parser.parse_args()
 
 def main():
     parse_args()
 
+    if args.split_type != None:
+        for t in args.split_type:
+            types64.append(t)
+
     with open(args.json_file, 'r') as fd:
         syscalls = json.load(fd)
 
     invocations = {}
+    mrsh_defs = {}
+    mrsh_includes = {}
     ids = []
     table_entries = []
     handlers = []
 
     for match_group, fn in syscalls:
-        handler, inv, sys_id, entry = analyze_fn(match_group)
+        handler, inv, mrsh, sys_id, entry = analyze_fn(match_group)
 
         if fn not in invocations:
             invocations[fn] = []
@@ -214,12 +366,24 @@ def main():
         table_entries.append(entry)
         handlers.append(handler)
 
+        if mrsh:
+            syscall = typename_split(match_group[0])[1]
+            mrsh_defs[syscall] = mrsh
+            mrsh_includes[syscall] = "#include <syscalls/%s>" % fn
+
     with open(args.syscall_dispatch, "w") as fp:
         table_entries.append("[K_SYSCALL_BAD] = handler_bad_syscall")
 
-        weak_defines = "".join([weak_template % name for name in handlers])
+        weak_defines = "".join([weak_template % name
+                                for name in handlers
+                                if not name in noweak])
 
-        fp.write(table_template % (weak_defines, ",\n\t".join(table_entries)))
+        # The "noweak" ones just get a regular declaration
+        weak_defines += "\n".join(["extern u32_t %s(u32_t arg1, u32_t arg2, u32_t arg3, u32_t arg4, u32_t arg5, u32_t arg6, void *ssf);"
+                                   % s for s in noweak])
+
+        fp.write(table_template % (weak_defines,
+                                   ",\n\t".join(table_entries)))
 
     # Listing header emitted to stdout
     ids.sort()
@@ -229,19 +393,32 @@ def main():
     for i, item in enumerate(ids):
         ids_as_defines += "#define {} {}\n".format(item, i)
 
-    handler_defines = "".join([handler_template % name for name in handlers])
     with open(args.syscall_list, "w") as fp:
-        fp.write(list_template % (ids_as_defines, handler_defines))
+        fp.write(list_template % ids_as_defines)
 
     os.makedirs(args.base_output, exist_ok=True)
     for fn, invo_list in invocations.items():
        out_fn = os.path.join(args.base_output, fn)
 
-        header = syscall_template % "\n\n".join(invo_list)
+        ig = re.sub("[^a-zA-Z0-9]", "_", "Z_INCLUDE_SYSCALLS_" + fn).upper()
+        include_guard = "#ifndef %s\n#define %s\n" % (ig, ig)
+        header = syscall_template % (include_guard, "\n\n".join(invo_list))
         with open(out_fn, "w") as fp:
             fp.write(header)
 
+    # Likewise emit _mrsh.c files for syscall inclusion
+    for fn in mrsh_defs:
+        mrsh_fn = os.path.join(args.base_output, fn + "_mrsh.c")
+
+        with open(mrsh_fn, "w") as fp:
+            fp.write("/* auto-generated by
gen_syscalls.py, don't edit */\n") + fp.write("#pragma GCC diagnostic push\n") + fp.write("#pragma GCC diagnostic ignored \"-Wstrict-aliasing\"\n") + fp.write(mrsh_includes[fn] + "\n") + fp.write("\n") + fp.write(mrsh_defs[fn] + "\n") + fp.write("#pragma GCC diagnostic pop\n") if __name__ == "__main__": main() diff --git a/subsys/net/ip/net_if.c b/subsys/net/ip/net_if.c index b2e3fa568dd..88817d078be 100644 --- a/subsys/net/ip/net_if.c +++ b/subsys/net/ip/net_if.c @@ -1148,7 +1148,7 @@ int z_impl_net_if_ipv6_addr_lookup_by_index(const struct in6_addr *addr) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv6_addr_lookup_by_index, addr) +static inline int z_vrfy_net_if_ipv6_addr_lookup_by_index(const struct in6_addr *addr) { struct in6_addr addr_v6; @@ -1156,6 +1156,7 @@ Z_SYSCALL_HANDLER(net_if_ipv6_addr_lookup_by_index, addr) return z_impl_net_if_ipv6_addr_lookup_by_index(&addr_v6); } +#include #endif static bool check_timeout(u32_t start, s32_t timeout, u32_t counter, @@ -1491,8 +1492,10 @@ bool z_impl_net_if_ipv6_addr_add_by_index(int index, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv6_addr_add_by_index, index, addr, addr_type, - vlifetime) +bool z_vrfy_net_if_ipv6_addr_add_by_index(int index, + struct in6_addr *addr, + enum net_addr_type addr_type, + u32_t vlifetime) { #if defined(CONFIG_NET_IF_USERSPACE_ACCESS) struct in6_addr addr_v6; @@ -1507,6 +1510,7 @@ Z_SYSCALL_HANDLER(net_if_ipv6_addr_add_by_index, index, addr, addr_type, return false; #endif /* CONFIG_NET_IF_USERSPACE_ACCESS */ } +#include #endif /* CONFIG_USERSPACE */ bool z_impl_net_if_ipv6_addr_rm_by_index(int index, @@ -1523,7 +1527,8 @@ bool z_impl_net_if_ipv6_addr_rm_by_index(int index, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv6_addr_rm_by_index, index, addr) +bool z_vrfy_net_if_ipv6_addr_rm_by_index(int index, + const struct in6_addr *addr) { #if defined(CONFIG_NET_IF_USERSPACE_ACCESS) struct in6_addr addr_v6; @@ -1535,6 +1540,7 @@ Z_SYSCALL_HANDLER(net_if_ipv6_addr_rm_by_index, index, addr) return false; #endif /* CONFIG_NET_IF_USERSPACE_ACCESS */ } +#include #endif /* CONFIG_USERSPACE */ struct net_if_mcast_addr *net_if_ipv6_maddr_add(struct net_if *iface, @@ -2795,7 +2801,7 @@ int z_impl_net_if_ipv4_addr_lookup_by_index(const struct in_addr *addr) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv4_addr_lookup_by_index, addr) +static inline int z_vrfy_net_if_ipv4_addr_lookup_by_index(const struct in_addr *addr) { struct in_addr addr_v4; @@ -2803,6 +2809,7 @@ Z_SYSCALL_HANDLER(net_if_ipv4_addr_lookup_by_index, addr) return z_impl_net_if_ipv4_addr_lookup_by_index(&addr_v4); } +#include #endif void net_if_ipv4_set_netmask(struct net_if *iface, @@ -2835,7 +2842,8 @@ bool z_impl_net_if_ipv4_set_netmask_by_index(int index, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv4_set_netmask_by_index, index, netmask) +bool z_vrfy_net_if_ipv4_set_netmask_by_index(int index, + const struct in_addr *netmask) { #if defined(CONFIG_NET_IF_USERSPACE_ACCESS) struct in_addr netmask_addr; @@ -2848,6 +2856,7 @@ Z_SYSCALL_HANDLER(net_if_ipv4_set_netmask_by_index, index, netmask) return false; #endif } +#include #endif /* CONFIG_USERSPACE */ void net_if_ipv4_set_gw(struct net_if *iface, const struct in_addr *gw) @@ -2879,7 +2888,8 @@ bool z_impl_net_if_ipv4_set_gw_by_index(int index, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv4_set_gw_by_index, index, gw) +bool z_vrfy_net_if_ipv4_set_gw_by_index(int index, + const struct in_addr *gw) { #if defined(CONFIG_NET_IF_USERSPACE_ACCESS) struct 
in_addr gw_addr; @@ -2891,6 +2901,7 @@ Z_SYSCALL_HANDLER(net_if_ipv4_set_gw_by_index, index, gw) return false; #endif } +#include #endif /* CONFIG_USERSPACE */ static struct net_if_addr *ipv4_addr_find(struct net_if *iface, @@ -3034,8 +3045,10 @@ bool z_impl_net_if_ipv4_addr_add_by_index(int index, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv4_addr_add_by_index, index, addr, addr_type, - vlifetime) +bool z_vrfy_net_if_ipv4_addr_add_by_index(int index, + struct in_addr *addr, + enum net_addr_type addr_type, + u32_t vlifetime) { #if defined(CONFIG_NET_IF_USERSPACE_ACCESS) struct in_addr addr_v4; @@ -3050,6 +3063,7 @@ Z_SYSCALL_HANDLER(net_if_ipv4_addr_add_by_index, index, addr, addr_type, return false; #endif /* CONFIG_NET_IF_USERSPACE_ACCESS */ } +#include #endif /* CONFIG_USERSPACE */ bool z_impl_net_if_ipv4_addr_rm_by_index(int index, @@ -3066,7 +3080,8 @@ bool z_impl_net_if_ipv4_addr_rm_by_index(int index, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_if_ipv4_addr_rm_by_index, index, addr) +bool z_vrfy_net_if_ipv4_addr_rm_by_index(int index, + const struct in_addr *addr) { #if defined(CONFIG_NET_IF_USERSPACE_ACCESS) struct in_addr addr_v4; @@ -3078,6 +3093,7 @@ Z_SYSCALL_HANDLER(net_if_ipv4_addr_rm_by_index, index, addr) return false; #endif /* CONFIG_NET_IF_USERSPACE_ACCESS */ } +#include #endif /* CONFIG_USERSPACE */ static struct net_if_mcast_addr *ipv4_maddr_find(struct net_if *iface, diff --git a/subsys/net/ip/utils.c b/subsys/net/ip/utils.c index 02c36cf755e..072babf2f3b 100644 --- a/subsys/net/ip/utils.c +++ b/subsys/net/ip/utils.c @@ -276,7 +276,8 @@ char *z_impl_net_addr_ntop(sa_family_t family, const void *src, } #if defined(CONFIG_USERSPACE) -Z_SYSCALL_HANDLER(net_addr_ntop, family, src, dst, size) +char *z_vrfy_net_addr_ntop(sa_family_t family, const void *src, + char *dst, size_t size) { char str[INET6_ADDRSTRLEN]; struct in6_addr addr6; @@ -305,8 +306,9 @@ Z_SYSCALL_HANDLER(net_addr_ntop, family, src, dst, size) Z_OOPS(z_user_to_copy((void *)dst, str, MIN(size, sizeof(str)))); - return (int)dst; + return dst; } +#include #endif /* CONFIG_USERSPACE */ int z_impl_net_addr_pton(sa_family_t family, const char *src, @@ -443,7 +445,8 @@ int z_impl_net_addr_pton(sa_family_t family, const char *src, } #if defined(CONFIG_USERSPACE) -Z_SYSCALL_HANDLER(net_addr_pton, family, src, dst) +int z_vrfy_net_addr_pton(sa_family_t family, const char *src, + void *dst) { char str[INET6_ADDRSTRLEN]; struct in6_addr addr6; @@ -483,6 +486,7 @@ Z_SYSCALL_HANDLER(net_addr_pton, family, src, dst) return 0; } +#include #endif /* CONFIG_USERSPACE */ static u16_t calc_chksum(u16_t sum, const u8_t *data, size_t len) diff --git a/subsys/net/l2/ethernet/ethernet.c b/subsys/net/l2/ethernet/ethernet.c index f4a3cfec188..8dfd174a846 100644 --- a/subsys/net/l2/ethernet/ethernet.c +++ b/subsys/net/l2/ethernet/ethernet.c @@ -1012,10 +1012,11 @@ struct device *z_impl_net_eth_get_ptp_clock_by_index(int index) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(net_eth_get_ptp_clock_by_index, index) +static inline struct device *z_vrfy_net_eth_get_ptp_clock_by_index(int index) { - return (u32_t)z_impl_net_eth_get_ptp_clock_by_index(index); + return z_impl_net_eth_get_ptp_clock_by_index(index); } +#include #endif /* CONFIG_USERSPACE */ #else /* CONFIG_PTP_CLOCK */ struct device *z_impl_net_eth_get_ptp_clock_by_index(int index) diff --git a/subsys/net/lib/sockets/sockets.c b/subsys/net/lib/sockets/sockets.c index c586bfe1394..5577edb88bd 100644 --- a/subsys/net/lib/sockets/sockets.c +++ 
b/subsys/net/lib/sockets/sockets.c @@ -150,13 +150,14 @@ int z_impl_zsock_socket(int family, int type, int proto) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_socket, family, type, proto) +static inline int z_vrfy_zsock_socket(int family, int type, int proto) { /* implementation call to net_context_get() should do all necessary * checking */ return z_impl_zsock_socket(family, type, proto); } +#include #endif /* CONFIG_USERSPACE */ int zsock_close_ctx(struct net_context *ctx) @@ -199,10 +200,11 @@ int z_impl_zsock_close(int sock) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_close, sock) +static inline int z_vrfy_zsock_close(int sock) { return z_impl_zsock_close(sock); } +#include #endif /* CONFIG_USERSPACE */ int z_impl_zsock_shutdown(int sock, int how) @@ -304,7 +306,7 @@ int z_impl_zsock_bind(int sock, const struct sockaddr *addr, socklen_t addrlen) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_bind, sock, addr, addrlen) +static inline int z_vrfy_zsock_bind(int sock, const struct sockaddr *addr, socklen_t addrlen) { struct sockaddr_storage dest_addr_copy; @@ -314,6 +316,7 @@ Z_SYSCALL_HANDLER(zsock_bind, sock, addr, addrlen) return z_impl_zsock_bind(sock, (struct sockaddr *)&dest_addr_copy, addrlen); } +#include #endif /* CONFIG_USERSPACE */ int zsock_connect_ctx(struct net_context *ctx, const struct sockaddr *addr, @@ -343,7 +346,8 @@ int z_impl_zsock_connect(int sock, const struct sockaddr *addr, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_connect, sock, addr, addrlen) +int z_vrfy_zsock_connect(int sock, const struct sockaddr *addr, + socklen_t addrlen) { struct sockaddr_storage dest_addr_copy; @@ -353,6 +357,7 @@ Z_SYSCALL_HANDLER(zsock_connect, sock, addr, addrlen) return z_impl_zsock_connect(sock, (struct sockaddr *)&dest_addr_copy, addrlen); } +#include #endif /* CONFIG_USERSPACE */ int zsock_listen_ctx(struct net_context *ctx, int backlog) @@ -369,10 +374,11 @@ int z_impl_zsock_listen(int sock, int backlog) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_listen, sock, backlog) +static inline int z_vrfy_zsock_listen(int sock, int backlog) { return z_impl_zsock_listen(sock, backlog); } +#include #endif /* CONFIG_USERSPACE */ int zsock_accept_ctx(struct net_context *parent, struct sockaddr *addr, @@ -434,7 +440,7 @@ int z_impl_zsock_accept(int sock, struct sockaddr *addr, socklen_t *addrlen) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_accept, sock, addr, addrlen) +static inline int z_vrfy_zsock_accept(int sock, struct sockaddr *addr, socklen_t *addrlen) { socklen_t addrlen_copy; int ret; @@ -458,6 +464,7 @@ Z_SYSCALL_HANDLER(zsock_accept, sock, addr, addrlen) return ret; } +#include #endif /* CONFIG_USERSPACE */ ssize_t zsock_sendto_ctx(struct net_context *ctx, const void *buf, size_t len, @@ -505,7 +512,8 @@ ssize_t z_impl_zsock_sendto(int sock, const void *buf, size_t len, int flags, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_sendto, sock, buf, len, flags, dest_addr, addrlen) +ssize_t z_vrfy_zsock_sendto(int sock, const void *buf, size_t len, int flags, + const struct sockaddr *dest_addr, socklen_t addrlen) { struct sockaddr_storage dest_addr_copy; @@ -520,6 +528,7 @@ Z_SYSCALL_HANDLER(zsock_sendto, sock, buf, len, flags, dest_addr, addrlen) dest_addr ? 
(struct sockaddr *)&dest_addr_copy : NULL, addrlen); } +#include #endif /* CONFIG_USERSPACE */ ssize_t zsock_sendmsg_ctx(struct net_context *ctx, const struct msghdr *msg, @@ -547,12 +556,13 @@ ssize_t z_impl_zsock_sendmsg(int sock, const struct msghdr *msg, int flags) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_sendmsg, sock, msg, flags) +static inline ssize_t z_vrfy_zsock_sendmsg(int sock, const struct msghdr *msg, int flags) { /* TODO: Create a copy of msg_buf and copy the data there */ return z_impl_zsock_sendmsg(sock, (const struct msghdr *)msg, flags); } +#include #endif /* CONFIG_USERSPACE */ static int sock_get_pkt_src_addr(struct net_pkt *pkt, @@ -850,11 +860,10 @@ ssize_t z_impl_zsock_recvfrom(int sock, void *buf, size_t max_len, int flags, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_recvfrom, sock, buf, max_len, flags, src_addr, - addrlen_param) +ssize_t z_vrfy_zsock_recvfrom(int sock, void *buf, size_t max_len, int flags, + struct sockaddr *src_addr, socklen_t *addrlen) { socklen_t addrlen_copy; - socklen_t *addrlen_ptr = (socklen_t *)addrlen_param; ssize_t ret; if (Z_SYSCALL_MEMORY_WRITE(buf, max_len)) { @@ -862,24 +871,24 @@ Z_SYSCALL_HANDLER(zsock_recvfrom, sock, buf, max_len, flags, src_addr, return -1; } - if (addrlen_param) { - Z_OOPS(z_user_from_copy(&addrlen_copy, - (socklen_t *)addrlen_param, + if (addrlen) { + Z_OOPS(z_user_from_copy(&addrlen_copy, addrlen, sizeof(socklen_t))); } Z_OOPS(src_addr && Z_SYSCALL_MEMORY_WRITE(src_addr, addrlen_copy)); ret = z_impl_zsock_recvfrom(sock, (void *)buf, max_len, flags, (struct sockaddr *)src_addr, - addrlen_param ? &addrlen_copy : NULL); + addrlen ? &addrlen_copy : NULL); - if (addrlen_param) { - Z_OOPS(z_user_to_copy(addrlen_ptr, &addrlen_copy, + if (addrlen) { + Z_OOPS(z_user_to_copy(addrlen, &addrlen_copy, sizeof(socklen_t))); } return ret; } +#include #endif /* CONFIG_USERSPACE */ /* As this is limited function, we don't follow POSIX signature, with @@ -1079,7 +1088,7 @@ int z_impl_zsock_poll(struct zsock_pollfd *fds, int nfds, int timeout) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_poll, fds, nfds, timeout) +static inline int z_vrfy_zsock_poll(struct zsock_pollfd *fds, int nfds, int timeout) { struct zsock_pollfd *fds_copy; size_t fds_size; @@ -1105,6 +1114,7 @@ Z_SYSCALL_HANDLER(zsock_poll, fds, nfds, timeout) return ret; } +#include #endif int z_impl_zsock_inet_pton(sa_family_t family, const char *src, void *dst) @@ -1117,7 +1127,7 @@ int z_impl_zsock_inet_pton(sa_family_t family, const char *src, void *dst) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_inet_pton, family, src, dst) +static inline int z_vrfy_zsock_inet_pton(sa_family_t family, const char *src, void *dst) { int dst_size; char src_copy[NET_IPV6_ADDR_LEN]; @@ -1138,10 +1148,11 @@ Z_SYSCALL_HANDLER(zsock_inet_pton, family, src, dst) Z_OOPS(z_user_string_copy(src_copy, (char *)src, sizeof(src_copy))); ret = z_impl_zsock_inet_pton(family, src_copy, dst_copy); - Z_OOPS(z_user_to_copy((void *)dst, dst_copy, dst_size)); + Z_OOPS(z_user_to_copy(dst, dst_copy, dst_size)); return ret; } +#include #endif int zsock_getsockopt_ctx(struct net_context *ctx, int level, int optname, @@ -1180,7 +1191,8 @@ int z_impl_zsock_getsockopt(int sock, int level, int optname, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_getsockopt, sock, level, optname, optval, optlen) +int z_vrfy_zsock_getsockopt(int sock, int level, int optname, + void *optval, socklen_t *optlen) { socklen_t kernel_optlen = *(socklen_t *)optlen; void *kernel_optval; @@ -1206,6 +1218,7 
@@ Z_SYSCALL_HANDLER(zsock_getsockopt, sock, level, optname, optval, optlen) return ret; } +#include #endif /* CONFIG_USERSPACE */ int zsock_setsockopt_ctx(struct net_context *ctx, int level, int optname, @@ -1320,7 +1333,8 @@ int z_impl_zsock_setsockopt(int sock, int level, int optname, } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_setsockopt, sock, level, optname, optval, optlen) +int z_vrfy_zsock_setsockopt(int sock, int level, int optname, + const void *optval, socklen_t optlen) { void *kernel_optval; int ret; @@ -1335,6 +1349,7 @@ Z_SYSCALL_HANDLER(zsock_setsockopt, sock, level, optname, optval, optlen) return ret; } +#include #endif /* CONFIG_USERSPACE */ int zsock_getsockname_ctx(struct net_context *ctx, struct sockaddr *addr, diff --git a/subsys/net/lib/sockets/sockets_misc.c b/subsys/net/lib/sockets/sockets_misc.c index c2778839dca..a8f98eff4db 100644 --- a/subsys/net/lib/sockets/sockets_misc.c +++ b/subsys/net/lib/sockets/sockets_misc.c @@ -18,9 +18,10 @@ int z_impl_zsock_gethostname(char *buf, size_t len) } #ifdef CONFIG_USERSPACE -Z_SYSCALL_HANDLER(zsock_gethostname, buf, len) +static inline int z_vrfy_zsock_gethostname(char *buf, size_t len) { Z_OOPS(Z_SYSCALL_MEMORY_WRITE(buf, len)); - return z_impl_zsock_gethostname((char *)buf, len); + return z_impl_zsock_gethostname(buf, len); } +#include #endif diff --git a/tests/benchmarks/timing_info/src/userspace_bench.c b/tests/benchmarks/timing_info/src/userspace_bench.c index 425b7d7c852..01567b5532d 100644 --- a/tests/benchmarks/timing_info/src/userspace_bench.c +++ b/tests/benchmarks/timing_info/src/userspace_bench.c @@ -38,11 +38,12 @@ u32_t z_impl_userspace_read_timer_value(void) return TIMING_INFO_GET_TIMER_VALUE(); } -Z_SYSCALL_HANDLER(userspace_read_timer_value) +static inline u32_t z_vrfy_userspace_read_timer_value(void) { TIMING_INFO_PRE_READ(); return TIMING_INFO_GET_TIMER_VALUE(); } +#include /******************************************************************************/ @@ -163,12 +164,13 @@ int z_impl_k_dummy_syscall(void) return 0; } -Z_SYSCALL_HANDLER(k_dummy_syscall) +static inline int z_vrfy_k_dummy_syscall(void) { TIMING_INFO_PRE_READ(); syscall_overhead_end_time = TIMING_INFO_GET_TIMER_VALUE(); return 0; } +#include void syscall_overhead_user_thread(void *p1, void *p2, void *p3) @@ -215,7 +217,7 @@ int z_impl_validation_overhead_syscall(void) return 0; } -Z_SYSCALL_HANDLER(validation_overhead_syscall) +static inline int z_vrfy_validation_overhead_syscall(void) { TIMING_INFO_PRE_READ(); validation_overhead_obj_init_start_time = TIMING_INFO_GET_TIMER_VALUE(); @@ -235,7 +237,7 @@ Z_SYSCALL_HANDLER(validation_overhead_syscall) validation_overhead_obj_end_time = TIMING_INFO_GET_TIMER_VALUE(); return status_0 || status_1; } - +#include void validation_overhead_user_thread(void *p1, void *p2, void *p3) { diff --git a/tests/kernel/fatal/src/main.c b/tests/kernel/fatal/src/main.c index 2d607a98f9e..e85b8d9cfd5 100644 --- a/tests/kernel/fatal/src/main.c +++ b/tests/kernel/fatal/src/main.c @@ -156,7 +156,11 @@ void z_impl_blow_up_priv_stack(void) blow_up_stack(); } -Z_SYSCALL_HANDLER0_SIMPLE_VOID(blow_up_priv_stack); +static inline void z_vrfy_blow_up_priv_stack(void) +{ + z_impl_blow_up_priv_stack(); +} +#include #endif /* CONFIG_USERSPACE */ #endif /* CONFIG_STACK_SENTINEL */ diff --git a/tests/kernel/mem_protect/syscalls/src/main.c b/tests/kernel/mem_protect/syscalls/src/main.c index ddf6a384630..3a72a13b6d1 100644 --- a/tests/kernel/mem_protect/syscalls/src/main.c +++ b/tests/kernel/mem_protect/syscalls/src/main.c 
@@ -20,7 +20,7 @@ size_t z_impl_string_nlen(char *src, size_t maxlen, int *err) return z_user_string_nlen(src, maxlen, err); } -Z_SYSCALL_HANDLER(string_nlen, src, maxlen, err) +static inline size_t z_vrfy_string_nlen(char *src, size_t maxlen, int *err) { int err_copy; size_t ret; @@ -34,6 +34,7 @@ Z_SYSCALL_HANDLER(string_nlen, src, maxlen, err) return ret; } +#include int z_impl_string_alloc_copy(char *src) { @@ -44,7 +45,7 @@ int z_impl_string_alloc_copy(char *src) } } -Z_SYSCALL_HANDLER(string_alloc_copy, src) +static inline int z_vrfy_string_alloc_copy(char *src) { char *src_copy; int ret; @@ -59,6 +60,7 @@ Z_SYSCALL_HANDLER(string_alloc_copy, src) return ret; } +#include int z_impl_string_copy(char *src) { @@ -69,7 +71,7 @@ int z_impl_string_copy(char *src) } } -Z_SYSCALL_HANDLER(string_copy, src) +static inline int z_vrfy_string_copy(char *src) { int ret = z_user_string_copy(kernel_buf, (char *)src, BUF_SIZE); @@ -79,6 +81,7 @@ Z_SYSCALL_HANDLER(string_copy, src) return z_impl_string_copy(kernel_buf); } +#include /* Not actually used, but will copy wrong string if called by mistake instead * of the handler @@ -89,10 +92,11 @@ int z_impl_to_copy(char *dest) return 0; } -Z_SYSCALL_HANDLER(to_copy, dest) +static inline int z_vrfy_to_copy(char *dest) { return z_user_to_copy((char *)dest, user_string, BUF_SIZE); } +#include /** * @brief Test to demonstrate usage of z_user_string_nlen() diff --git a/tests/kernel/mem_protect/userspace/src/main.c b/tests/kernel/mem_protect/userspace/src/main.c index e83d33bb974..5ec692bfd07 100644 --- a/tests/kernel/mem_protect/userspace/src/main.c +++ b/tests/kernel/mem_protect/userspace/src/main.c @@ -899,25 +899,25 @@ void z_impl_stack_info_get(u32_t *start_addr, u32_t *size) *size = k_current_get()->stack_info.size; } -Z_SYSCALL_HANDLER(stack_info_get, start_addr, size) +static inline void z_vrfy_stack_info_get(u32_t *start_addr, u32_t *size) { Z_OOPS(Z_SYSCALL_MEMORY_WRITE(start_addr, sizeof(u32_t))); Z_OOPS(Z_SYSCALL_MEMORY_WRITE(size, sizeof(u32_t))); z_impl_stack_info_get((u32_t *)start_addr, (u32_t *)size); - - return 0; } +#include int z_impl_check_perms(void *addr, size_t size, int write) { return z_arch_buffer_validate(addr, size, write); } -Z_SYSCALL_HANDLER(check_perms, addr, size, write) +static inline int z_vrfy_check_perms(void *addr, size_t size, int write) { return z_impl_check_perms((void *)addr, size, write); } +#include void stack_buffer_scenarios(k_thread_stack_t *stack_obj, size_t obj_size) {
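
To make the generated pieces above concrete, here is a hand-expanded sketch of roughly what gen_syscalls.py emits for a system call with a 64-bit return value, using k_uptime_get() as the example. It was worked out by following wrapper_defs() and marshall_defs() above rather than copied from build output, so treat the exact spelling (casts, unused-argument handling, file placement) as approximate; Zephyr's internal kernel headers are assumed to be in scope.

    /* User-side wrapper, emitted into the generated syscalls/ header.
     * Zephyr internals (z_syscall_trap(), z_arch_syscall_invoke1(),
     * compiler_barrier(), u32_t/u64_t/s64_t) are assumed.
     */
    extern s64_t z_impl_k_uptime_get(void);

    static inline s64_t k_uptime_get(void)
    {
    #ifdef CONFIG_USERSPACE
    	u64_t ret64;

    	if (z_syscall_trap()) {
    		/* The 64-bit result comes back through a buffer whose
    		 * address is passed as the only register argument.
    		 */
    		(void)z_arch_syscall_invoke1((u32_t)&ret64,
    					     K_SYSCALL_K_UPTIME_GET);
    		return (s64_t)ret64;
    	}
    #endif
    	compiler_barrier();
    	return z_impl_k_uptime_get();
    }

    /* Kernel-side unmarshaller, emitted into k_uptime_get_mrsh.c and
     * included at the bottom of kernel/timeout.c.  (The generator also
     * casts the unused arg1..arg5 to void; omitted here for brevity.)
     */
    extern s64_t z_vrfy_k_uptime_get(void);

    u32_t z_mrsh_k_uptime_get(u32_t arg0, u32_t arg1, u32_t arg2,
    			  u32_t arg3, u32_t arg4, u32_t arg5, void *ssf)
    {
    	_current_cpu->syscall_frame = ssf;
    	s64_t ret = z_vrfy_k_uptime_get();

    	/* arg0 is the user-supplied return buffer: validate it as
    	 * untrusted memory before writing the result back.
    	 */
    	Z_OOPS(Z_SYSCALL_MEMORY_WRITE(((u64_t *)arg0), 8));
    	*((u64_t *)arg0) = ret;
    	return 0;
    }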
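
A 64-bit argument goes through the same machinery in the other direction: the wrapper splits it into two 32-bit words with the lo/hi union, and the unmarshaller reassembles it before calling the verifier. The sketch below uses a hypothetical example_syscall() (not a real Zephyr API, purely for illustration). When more than six 32-bit words would be needed, the surplus values are instead packed into a u32_t array in the caller's memory, which the unmarshaller must validate with Z_SYSCALL_MEMORY_READ() before unpacking.

    /* Hypothetical API: int example_syscall(u32_t a, u64_t b).
     * User-side wrapper: the u64_t is split into two 32-bit words.
     */
    extern int z_impl_example_syscall(u32_t a, u64_t b);

    static inline int example_syscall(u32_t a, u64_t b)
    {
    #ifdef CONFIG_USERSPACE
    	if (z_syscall_trap()) {
    		union { struct { u32_t lo, hi; } split; u64_t val; } parm0;

    		parm0.val = b;
    		return (int) z_arch_syscall_invoke3(*(u32_t *)&a,
    						    parm0.split.lo,
    						    parm0.split.hi,
    						    K_SYSCALL_EXAMPLE_SYSCALL);
    	}
    #endif
    	compiler_barrier();
    	return z_impl_example_syscall(a, b);
    }

    /* Kernel-side unmarshaller: the two halves are rejoined before the
     * z_vrfy_ verifier runs.  (Unused arg3..arg5 handling omitted.)
     */
    extern int z_vrfy_example_syscall(u32_t a, u64_t b);

    u32_t z_mrsh_example_syscall(u32_t arg0, u32_t arg1, u32_t arg2,
    			     u32_t arg3, u32_t arg4, u32_t arg5, void *ssf)
    {
    	_current_cpu->syscall_frame = ssf;
    	union { struct { u32_t lo, hi; } split; u64_t val; } parm0;

    	parm0.split.lo = arg1;
    	parm0.split.hi = arg2;

    	return (u32_t) z_vrfy_example_syscall(*(u32_t *)&arg0, parm0.val);
    }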