#!/usr/bin/env python3
#
# Copyright (c) 2017 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0

"""
Script to generate system call invocation macros

This script parses the system call metadata JSON file emitted by
parse_syscalls.py to create several files:

- A file containing weak aliases of any potentially unimplemented system calls,
  as well as the system call dispatch table, which maps system call type IDs
  to their handler functions.

- A header file defining the system call type IDs, as well as function
  prototypes for all system call handler functions.

- A directory containing header files. Each header corresponds to a header
  that was identified as containing system call declarations. These
  generated headers contain the inline invocation functions for each system
  call in that header.
"""

import sys
import re
import argparse
import os
import json

# Some kernel headers cannot include automated tracing without causing unintended recursion or
# other serious issues.
# These headers typically already have very specific tracing hooks for all relevant things
# written by hand, so they are excluded.
notracing = ["kernel.h", "errno_private.h"]

types64 = ["int64_t", "uint64_t"]

# The kernel linkage is complicated. These functions from
# userspace_handlers.c are present in the kernel .a library after
# userspace.c, which contains the weak fallbacks defined here. So the
# linker finds the weak one first and stops searching, and thus won't
# see the real implementation which should override. Yet changing the
# order runs afoul of a comment in CMakeLists.txt that the order is
# critical. These are core syscalls that won't ever be unconfigured,
# just disable the fallback mechanism as a simple workaround.
noweak = ["z_mrsh_k_object_release",
          "z_mrsh_k_object_access_grant",
          "z_mrsh_k_object_alloc"]

table_template = """/* auto-generated by gen_syscalls.py, don't edit */

/* Weak handler functions that get replaced by the real ones unless a system
 * call is not implemented due to kernel configuration.
 */
%s

const _k_syscall_handler_t _k_syscall_table[K_SYSCALL_LIMIT] = {
\t%s
};
"""

list_template = """/* auto-generated by gen_syscalls.py, don't edit */

#ifndef ZEPHYR_SYSCALL_LIST_H
#define ZEPHYR_SYSCALL_LIST_H

%s

#ifndef _ASMLANGUAGE

#include <stdint.h>

#endif /* _ASMLANGUAGE */

#endif /* ZEPHYR_SYSCALL_LIST_H */
"""

syscall_template = """/* auto-generated by gen_syscalls.py, don't edit */

{include_guard}

{tracing_include}

#ifndef _ASMLANGUAGE

#include <syscall_list.h>
#include <syscall.h>

#include <linker/sections.h>

#ifdef __cplusplus
extern "C" {{
#endif

{invocations}

#ifdef __cplusplus
}}
#endif

#endif

#endif /* include guard */
"""

handler_template = """
extern uintptr_t z_hdlr_%s(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3,
                           uintptr_t arg4, uintptr_t arg5, uintptr_t arg6, void *ssf);
"""

weak_template = """
__weak ALIAS_OF(handler_no_syscall)
uintptr_t %s(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3,
             uintptr_t arg4, uintptr_t arg5, uintptr_t arg6, void *ssf);
"""

# Defines a macro wrapper which supersedes the syscall when used
# and provides tracing enter/exit hooks while allowing per-compilation-unit
# enable/disable of syscall tracing. Used for functions that return a value.
# Note that the last argument to the exit macro is the return value.
syscall_tracer_with_return_template = """
#if (CONFIG_TRACING_SYSCALL == 1)
#ifndef DISABLE_SYSCALL_TRACING
{trace_diagnostic}
#define {func_name}({argnames}) ({{ \
{func_type} retval; \
sys_port_trace_syscall_enter({syscall_id}, {func_name}{trace_argnames}); \
retval = {func_name}({argnames}); \
sys_port_trace_syscall_exit({syscall_id}, {func_name}{trace_argnames}, retval); \
retval; \
}})
#endif
#endif
"""

# Defines a macro wrapper which supersedes the syscall when used
# and provides tracing enter/exit hooks while allowing per-compilation-unit
# enable/disable of syscall tracing. Used for void functions (no return value).
syscall_tracer_void_template = """
#if (CONFIG_TRACING_SYSCALL == 1)
#ifndef DISABLE_SYSCALL_TRACING
{trace_diagnostic}
#define {func_name}({argnames}) do {{ \
sys_port_trace_syscall_enter({syscall_id}, {func_name}{trace_argnames}); \
{func_name}({argnames}); \
sys_port_trace_syscall_exit({syscall_id}, {func_name}{trace_argnames}); \
}} while(false)
#endif
#endif
"""

typename_regex = re.compile(r'(.*?)([A-Za-z0-9_]+)$')


class SyscallParseException(Exception):
    pass


def typename_split(item):
    if "[" in item:
        raise SyscallParseException(
            "Please pass arrays to syscalls as pointers, unable to process '%s'" %
            item)

    if "(" in item:
        raise SyscallParseException(
            "Please use typedefs for function pointers")

    mo = typename_regex.match(item)
    if not mo:
        raise SyscallParseException("Malformed system call invocation")

    m = mo.groups()
    return (m[0].strip(), m[1])
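
# Illustrative behaviour (not executed; follows from typename_regex above):
#   typename_split("struct k_sem *sem")  -> ("struct k_sem *", "sem")
#   typename_split("unsigned int limit") -> ("unsigned int", "limit")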


def need_split(argtype):
    return (not args.long_registers) and (argtype in types64)
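
# Sketch of the effect: when args.long_registers is false (registers hold only
# 32-bit words), need_split("uint64_t") is True and the value travels as two
# words; need_split("unsigned int") is False and it travels as one.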


# Note: "lo" and "hi" are named in little endian conventions,
# but it doesn't matter as long as they are consistently
# generated.
def union_decl(type, split):
    middle = "struct { uintptr_t lo, hi; } split" if split else "uintptr_t x"
    return "union { %s; %s val; }" % (middle, type)


def wrapper_defs(func_name, func_type, args, fn):
    ret64 = need_split(func_type)
    mrsh_args = []  # List of rvalue expressions for the marshalled invocation

    decl_arglist = ", ".join([" ".join(argrec) for argrec in args]) or "void"
    syscall_id = "K_SYSCALL_" + func_name.upper()

    wrap = "extern %s z_impl_%s(%s);\n" % (func_type, func_name, decl_arglist)
    wrap += "\n"
    wrap += "__pinned_func\n"
    wrap += "static inline %s %s(%s)\n" % (func_type, func_name, decl_arglist)
    wrap += "{\n"
    wrap += "#ifdef CONFIG_USERSPACE\n"
    wrap += ("\t" + "uint64_t ret64;\n") if ret64 else ""
    wrap += "\t" + "if (z_syscall_trap()) {\n"

    valist_args = []
    for argnum, (argtype, argname) in enumerate(args):
        split = need_split(argtype)
        wrap += "\t\t%s parm%d" % (union_decl(argtype, split), argnum)
        if argtype != "va_list":
            wrap += " = { .val = %s };\n" % argname
        else:
            # va_list objects are ... peculiar.
            wrap += ";\n" + "\t\t" + "va_copy(parm%d.val, %s);\n" % (argnum, argname)
            valist_args.append("parm%d.val" % argnum)
        if split:
            mrsh_args.append("parm%d.split.lo" % argnum)
            mrsh_args.append("parm%d.split.hi" % argnum)
        else:
            mrsh_args.append("parm%d.x" % argnum)

    if ret64:
        mrsh_args.append("(uintptr_t)&ret64")

    if len(mrsh_args) > 6:
        wrap += "\t\t" + "uintptr_t more[] = {\n"
        wrap += "\t\t\t" + (",\n\t\t\t".join(mrsh_args[5:])) + "\n"
        wrap += "\t\t" + "};\n"
        mrsh_args[5:] = ["(uintptr_t) &more"]
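
    # Only six words are passed directly to arch_syscall_invoke*(), so when the
    # marshalled argument list is longer, the sixth and later words are emitted
    # into an on-stack "more" array and replaced by a single pointer to it.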

    invoke = ("arch_syscall_invoke%d(%s)"
              % (len(mrsh_args),
                 ", ".join(mrsh_args + [syscall_id])))

    if ret64:
        invoke = "\t\t" + "(void) %s;\n" % invoke
        retcode = "\t\t" + "return (%s) ret64;\n" % func_type
    elif func_type == "void":
        invoke = "\t\t" + "(void) %s;\n" % invoke
        retcode = "\t\t" + "return;\n"
    elif valist_args:
        invoke = "\t\t" + "%s retval = %s;\n" % (func_type, invoke)
        retcode = "\t\t" + "return retval;\n"
    else:
        invoke = "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
        retcode = ""
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditional the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 13:34:31 -07:00
|
|
|
|
scripts: gen_syscalls: fix argument marshalling with 64-bit debug builds
Let's consider this (simplified) compilation result of a debug build
using -O0 for riscv64:
|__pinned_func
|static inline int k_sem_init(struct k_sem * sem,
| unsigned int initial_count,
| unsigned int limit)
|{
| 80000ad0: 6105 addi sp,sp,32
| 80000ad2: ec06 sd ra,24(sp)
| 80000ad4: e42a sd a0,8(sp)
| 80000ad6: c22e sw a1,4(sp)
| 80000ad8: c032 sw a2,0(sp)
| ret = arch_is_user_context();
| 80000ada: b39ff0ef jal ra,80000612
| if (z_syscall_trap()) {
| 80000ade: c911 beqz a0,80000af2
| return (int) arch_syscall_invoke3(*(uintptr_t *)&sem,
| *(uintptr_t *)&initial_count,
| *(uintptr_t *)&limit,
| K_SYSCALL_K_SEM_INIT);
| 80000ae0: 6522 ld a0,8(sp)
| 80000ae2: 00413583 ld a1,4(sp)
| 80000ae6: 6602 ld a2,0(sp)
| 80000ae8: 0b700693 li a3,183
| [...]
We clearly see the 32-bit values `initial_count` (a1) and `limit` (a2)
being stored in memory with the `sw` (store word) instruction. Then,
according to the source code, the address of those values is casted
as a pointer to uintptr_t values, and that pointer is dereferenced to
get back those values with the `ld` (load double) instruction this time.
In other words, the assembly does exactly what the C code indicates.
This is wrong for 2 reasons:
- The top half of a1 and a2 will contain garbage due to the `ld` used
to retrieve them. Whether or not the top bits will be cleared
eventually depends on the architecture and compiler.
- Regardless of the above, a1 and a2 would be plain wrong on a big
endian system.
- The load of a1 will cause a misaligned trap as it is 4-byte aligned
while `ld` expects a 8-byte alignment.
The above code happens to work properly when compiling with
optimizations enabled as the compiler simplifies the cast and
dereference away, and register content is used as is in that case.
That doesn't make the code any more "correct" though.
The reason for taking the address of an argument and dereference it as an
uintptr_t pointer is most likely done to work around the fact that the
compiler refuses to cast an aggregate value to an integer, even if that
aggregate value is in fact a simple structure wrapping an integer.
So let's fix this code by:
- Removing the pointer dereference roundtrip and associated casts. This
gets rid of all the issues listed above.
- Using a union to perform the type transition which deals with
aggregates perfectly well. The compiler does optimize things to the
same assembly output in the end.
This also makes the compiler happier as those pragmas to shut up warnings
are no longer needed. It should be the same about coverity.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-04 17:03:32 -05:00
|
|
|
wrap += invoke
|
|
|
|
for argname in valist_args:
|
|
|
|
wrap += "\t\t" + "va_end(%s);\n" % argname
|
|
|
|
wrap += retcode
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditional the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 13:34:31 -07:00
|
|
|
wrap += "\t" + "}\n"
|
|
|
|
wrap += "#endif\n"
|
|
|
|
|
|
|
|
    # Otherwise fall through to direct invocation of the impl func.
    # Note the compiler barrier: that is required to prevent code from
    # the impl call from being hoisted above the check for user
    # context.
    impl_arglist = ", ".join([argrec[1] for argrec in args])
    impl_call = "z_impl_%s(%s)" % (func_name, impl_arglist)
    wrap += "\t" + "compiler_barrier();\n"
    wrap += "\t" + "%s%s;\n" % ("return " if func_type != "void" else "",
                                impl_call)

    wrap += "}\n"

    if fn not in notracing:
        argnames = ", ".join([f"{argname}" for _, argname in args])
        trace_argnames = ""
        if len(args) > 0:
            trace_argnames = ", " + argnames
        trace_diagnostic = ""
        if os.getenv('TRACE_DIAGNOSTICS'):
            trace_diagnostic = f"#warning Tracing {func_name}"
        if func_type != "void":
            wrap += syscall_tracer_with_return_template.format(func_type=func_type, func_name=func_name,
                                                               argnames=argnames, trace_argnames=trace_argnames,
                                                               syscall_id=syscall_id, trace_diagnostic=trace_diagnostic)
        else:
            wrap += syscall_tracer_void_template.format(func_type=func_type, func_name=func_name,
                                                        argnames=argnames, trace_argnames=trace_argnames,
                                                        syscall_id=syscall_id, trace_diagnostic=trace_diagnostic)

    return wrap

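# Rough sketch (illustrative only; the exact text depends on union_decl() and
# the templates above) of the inline wrapper wrapper_defs() emits for a
# syscall such as k_sem_init(): when z_syscall_trap() is true each argument is
# marshalled into a register-sized word through a type-punning union and the
# call traps into the kernel, otherwise it falls through to the z_impl_
# function behind a compiler barrier:
#
#   static inline int k_sem_init(struct k_sem *sem, unsigned int initial_count,
#                                unsigned int limit)
#   {
#   #ifdef CONFIG_USERSPACE
#           if (z_syscall_trap()) {
#                   /* parm0..parm2 are punning unions; .x is the uintptr_t view */
#                   return (int) arch_syscall_invoke3(parm0.x, parm1.x, parm2.x,
#                                                     K_SYSCALL_K_SEM_INIT);
#           }
#   #endif
#           compiler_barrier();
#           return z_impl_k_sem_init(sem, initial_count, limit);
#   }
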
# Returns an expression for the specified (zero-indexed!) marshalled
# parameter to a syscall, with handling for a final "more" parameter.
def mrsh_rval(mrsh_num, total):
    if mrsh_num < 5 or total <= 6:
        return "arg%d" % mrsh_num
    else:
        return "(((uintptr_t *)more)[%d])" % (mrsh_num - 5)
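
# For example (illustrative): when at most six words are marshalled, every
# parameter lives in a register argument, so mrsh_rval(2, 6) yields "arg2";
# with more than six words the tail is read from the "more" array passed in
# place of arg5, so mrsh_rval(7, 10) yields "(((uintptr_t *)more)[2])".
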

def marshall_defs(func_name, func_type, args):
    mrsh_name = "z_mrsh_" + func_name

    nmrsh = 0 # number of marshalled uintptr_t parameter
    vrfy_parms = [] # list of (argtype, bool_is_split)
    for (argtype, _) in args:
        split = need_split(argtype)
        vrfy_parms.append((argtype, split))
        nmrsh += 2 if split else 1
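
    # Counting example (illustrative): three word-sized arguments marshal into
    # three uintptr_t slots, while an argument whose type is in types64 (e.g.
    # uint64_t, or a type added via --split-type) is flagged by need_split()
    # and consumes two slots on targets without 64-bit registers.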

    # Final argument for a 64 bit return value?
    if need_split(func_type):
        nmrsh += 1

    decl_arglist = ", ".join([" ".join(argrec) for argrec in args])
    mrsh = "extern %s z_vrfy_%s(%s);\n" % (func_type, func_name, decl_arglist)

    mrsh += "uintptr_t %s(uintptr_t arg0, uintptr_t arg1, uintptr_t arg2,\n" % mrsh_name
    if nmrsh <= 6:
        mrsh += "\t\t" + "uintptr_t arg3, uintptr_t arg4, uintptr_t arg5, void *ssf)\n"
    else:
        mrsh += "\t\t" + "uintptr_t arg3, uintptr_t arg4, void *more, void *ssf)\n"
    mrsh += "{\n"
    mrsh += "\t" + "_current->syscall_frame = ssf;\n"

    for unused_arg in range(nmrsh, 6):
        mrsh += "\t(void) arg%d;\t/* unused */\n" % unused_arg

    if nmrsh > 6:
        mrsh += ("\tZ_OOPS(Z_SYSCALL_MEMORY_READ(more, "
                 + str(nmrsh - 5) + " * sizeof(uintptr_t)));\n")
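
    # Illustrative: a syscall needing eight marshalled words passes the first
    # five in arg0..arg4 and the remaining three through "more", so the
    # generated handler first checks that 3 * sizeof(uintptr_t) bytes of the
    # caller-supplied array are readable before dereferencing it.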

    argnum = 0
    for i, (argtype, split) in enumerate(vrfy_parms):
        mrsh += "\t%s parm%d;\n" % (union_decl(argtype, split), i)
        if split:
            mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
            argnum += 1
            mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
        else:
            mrsh += "\t" + "parm%d.x = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
        argnum += 1
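
    # Unpacking example (illustrative): a split 64-bit first argument is
    # reassembled from two words through the union from union_decl(), e.g.
    # "parm0.split.lo = arg0;" followed by "parm0.split.hi = arg1;", while a
    # plain word-sized argument is simply "parm0.x = arg0;"; in both cases
    # parm0.val is what gets handed to the z_vrfy_ function below.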

    # Finally, invoke the verify function
    out_args = ", ".join(["parm%d.val" % i for i in range(len(args))])
    vrfy_call = "z_vrfy_%s(%s)" % (func_name, out_args)
if func_type == "void":
|
|
|
|
mrsh += "\t" + "%s;\n" % vrfy_call
|
2020-05-28 11:48:54 -07:00
|
|
|
mrsh += "\t" + "_current->syscall_frame = NULL;\n"
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditional the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 13:34:31 -07:00
|
|
|
mrsh += "\t" + "return 0;\n"
|
|
|
|
else:
|
|
|
|
mrsh += "\t" + "%s ret = %s;\n" % (func_type, vrfy_call)
|
2020-05-28 11:48:54 -07:00
|
|
|
|
2019-11-05 09:39:05 -08:00
|
|
|
if need_split(func_type):
|
2020-05-27 11:26:57 -05:00
|
|
|
ptr = "((uint64_t *)%s)" % mrsh_rval(nmrsh - 1, nmrsh)
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditional the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 13:34:31 -07:00
|
|
|
mrsh += "\t" + "Z_OOPS(Z_SYSCALL_MEMORY_WRITE(%s, 8));\n" % ptr
|
|
|
|
mrsh += "\t" + "*%s = ret;\n" % ptr
|
2020-05-28 11:48:54 -07:00
|
|
|
mrsh += "\t" + "_current->syscall_frame = NULL;\n"
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditional the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 13:34:31 -07:00
|
|
|
mrsh += "\t" + "return 0;\n"
|
|
|
|
else:
|
2020-05-28 11:48:54 -07:00
|
|
|
mrsh += "\t" + "_current->syscall_frame = NULL;\n"
|
2019-11-05 09:27:18 -08:00
|
|
|
mrsh += "\t" + "return (uintptr_t) ret;\n"
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditional the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 13:34:31 -07:00
|
|
|
|
|
|
|
mrsh += "}\n"
|
|
|
|
|
|
|
|
return mrsh, mrsh_name
|
2018-07-23 17:27:57 -07:00
|
|
|
|
2021-10-13 12:16:39 -05:00
|
|
|
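# Putting it together (illustrative sketch, assuming union_decl() expands to a
# "{ <type> val; uintptr_t x; }" punning union): for a one-argument void
# syscall like k_sem_give(), marshall_defs() produces roughly:
#
#   extern void z_vrfy_k_sem_give(struct k_sem * sem);
#   uintptr_t z_mrsh_k_sem_give(uintptr_t arg0, uintptr_t arg1, uintptr_t arg2,
#                   uintptr_t arg3, uintptr_t arg4, uintptr_t arg5, void *ssf)
#   {
#           _current->syscall_frame = ssf;
#           (void) arg1;    /* unused */
#           ...
#           (void) arg5;    /* unused */
#           union { struct k_sem * val; uintptr_t x; } parm0;
#           parm0.x = arg0;
#           z_vrfy_k_sem_give(parm0.val);
#           _current->syscall_frame = NULL;
#           return 0;
#   }

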
def analyze_fn(match_group, fn):
    func, args = match_group

    try:
        if args == "void":
            args = []
        else:
            args = [typename_split(a.strip()) for a in args.split(",")]

        func_type, func_name = typename_split(func)
    except SyscallParseException:
        sys.stderr.write("In declaration of %s\n" % func)
        raise

    sys_id = "K_SYSCALL_" + func_name.upper()

    marshaller = None
    marshaller, handler = marshall_defs(func_name, func_type, args)
    invocation = wrapper_defs(func_name, func_type, args, fn)

    # Entry in _k_syscall_table
    table_entry = "[%s] = %s" % (sys_id, handler)

    return (handler, invocation, marshaller, sys_id, table_entry)

def parse_args():
|
|
|
|
global args
|
2017-12-12 08:19:25 -05:00
|
|
|
parser = argparse.ArgumentParser(
|
|
|
|
description=__doc__,
|
|
|
|
formatter_class=argparse.RawDescriptionHelpFormatter)
|
2017-09-28 16:54:35 -07:00
|
|
|
|
2017-11-20 13:03:55 +01:00
|
|
|
parser.add_argument("-i", "--json-file", required=True,
|
2017-12-12 08:19:25 -05:00
|
|
|
help="Read syscall information from json file")
|
2017-09-28 16:54:35 -07:00
|
|
|
parser.add_argument("-d", "--syscall-dispatch", required=True,
|
2017-12-12 08:19:25 -05:00
|
|
|
help="output C system call dispatch table file")
|
2018-07-23 18:10:15 -07:00
|
|
|
parser.add_argument("-l", "--syscall-list", required=True,
|
|
|
|
help="output C system call list header")
|
2017-09-28 16:54:35 -07:00
|
|
|
parser.add_argument("-o", "--base-output", required=True,
|
2017-12-12 08:19:25 -05:00
|
|
|
help="Base output directory for syscall macro headers")
|
    parser.add_argument("-s", "--split-type", action="append",
                        help="A long type that must be split/marshalled on 32-bit systems")
    parser.add_argument("-x", "--long-registers", action="store_true",
                        help="Indicates we are on a system with 64-bit registers")

    args = parser.parse_args()
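
    # For orientation only -- with a hypothetical invocation the parsed options
    # might look like:
    #
    #   args.json_file        == "syscalls.json"
    #   args.syscall_dispatch == "syscall_dispatch.c"
    #   args.syscall_list     == "syscall_list.h"
    #   args.base_output      == "include/generated/syscalls"
    #   args.split_type       == ["k_timeout_t"]   # one entry per -s option
    #   args.long_registers   == False             # -x not given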


def main():
    parse_args()

    if args.split_type is not None:
        for t in args.split_type:
            types64.append(t)
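
    # Illustrative sketch only, not part of the generator: a type added via
    # --split-type is treated like the built-in 64-bit types, so on a target
    # without 64-bit registers one such argument travels as two 32-bit words,
    # conceptually:
    #
    #   lo = value & 0xffffffff
    #   hi = (value >> 32) & 0xffffffff
    #   assert (hi << 32) | lo == value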

    with open(args.json_file, 'r') as fd:
        syscalls = json.load(fd)
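
    # The JSON file is the metadata emitted by parse_syscalls.py.  Illustrative
    # shape only (the exact field layout is defined by that script): each entry
    # pairs a declaration match with the header it was found in, roughly
    #
    #   [<match_group>, "kernel.h"]
    #
    # where match_group[0] carries the return type and function name, e.g.
    # "int k_sem_init" (see the typename_split() call below).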

    invocations = {}
    mrsh_defs = {}
    mrsh_includes = {}
    ids = []
    table_entries = []
    handlers = []

    for match_group, fn in syscalls:
        handler, inv, mrsh, sys_id, entry = analyze_fn(match_group, fn)
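
        # analyze_fn() (defined above) returns five strings per declaration; for
        # a hypothetical syscall k_foo() they would look roughly like:
        #   handler -- marshalling handler name, e.g. "z_mrsh_k_foo"
        #   inv     -- inline invocation wrapper emitted into the generated header
        #   mrsh    -- body of the z_mrsh_* unmarshalling function (may be empty)
        #   sys_id  -- syscall ID macro name, e.g. "K_SYSCALL_K_FOO"
        #   entry   -- dispatch table entry, e.g. "[K_SYSCALL_K_FOO] = z_mrsh_k_foo"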

        if fn not in invocations:
            invocations[fn] = []

        invocations[fn].append(inv)
        ids.append(sys_id)
        table_entries.append(entry)
        handlers.append(handler)

        if mrsh:
            syscall = typename_split(match_group[0])[1]
            mrsh_defs[syscall] = mrsh
            mrsh_includes[syscall] = "#include <syscalls/%s>" % fn
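
            # For declarations found in kernel.h, for example, this records
            # "#include <syscalls/kernel.h>", which is written into the
            # per-syscall *_mrsh.c files emitted at the end of main().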

    with open(args.syscall_dispatch, "w") as fp:
        table_entries.append("[K_SYSCALL_BAD] = handler_bad_syscall")

        weak_defines = "".join([weak_template % name
                                for name in handlers
                                if name not in noweak])

        # The "noweak" ones just get a regular declaration
        weak_defines += "\n".join(["extern uintptr_t %s(uintptr_t arg1, uintptr_t arg2, uintptr_t arg3, uintptr_t arg4, uintptr_t arg5, uintptr_t arg6, void *ssf);"
                                   % s for s in noweak])
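
        # Names listed in noweak get this plain extern declaration rather than a
        # weak alias; for a hypothetical handler "z_mrsh_k_foo" the format string
        # above expands to (wrapped here for readability):
        #
        #   extern uintptr_t z_mrsh_k_foo(uintptr_t arg1, uintptr_t arg2,
        #                                 uintptr_t arg3, uintptr_t arg4,
        #                                 uintptr_t arg5, uintptr_t arg6,
        #                                 void *ssf);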

        fp.write(table_template % (weak_defines,
                                   ",\n\t".join(table_entries)))
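
        # With hypothetical entries, the joined list passed to table_template
        # reads roughly (",\n\t" puts each entry on its own tab-indented line):
        #
        #   [K_SYSCALL_K_BAR] = z_mrsh_k_bar,
        #       [K_SYSCALL_K_FOO] = z_mrsh_k_foo,
        #       [K_SYSCALL_BAD] = handler_bad_syscall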

    # Generate the system call ID listing header
    ids.sort()
    ids.extend(["K_SYSCALL_BAD", "K_SYSCALL_LIMIT"])

    ids_as_defines = ""
    for i, item in enumerate(ids):
        ids_as_defines += "#define {} {}\n".format(item, i)
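
    # Given, say, two hypothetical syscalls plus the two sentinels appended
    # above, the generated text is a dense numbering starting at zero:
    #
    #   #define K_SYSCALL_K_BAR 0
    #   #define K_SYSCALL_K_FOO 1
    #   #define K_SYSCALL_BAD 2
    #   #define K_SYSCALL_LIMIT 3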

    with open(args.syscall_list, "w") as fp:
        fp.write(list_template % ids_as_defines)

    os.makedirs(args.base_output, exist_ok=True)
    for fn, invo_list in invocations.items():
        out_fn = os.path.join(args.base_output, fn)

        ig = re.sub("[^a-zA-Z0-9]", "_", "Z_INCLUDE_SYSCALLS_" + fn).upper()
        include_guard = "#ifndef %s\n#define %s\n" % (ig, ig)
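
        # For a header named kernel.h this produces
        # ig == "Z_INCLUDE_SYSCALLS_KERNEL_H", so include_guard holds:
        #
        #   #ifndef Z_INCLUDE_SYSCALLS_KERNEL_H
        #   #define Z_INCLUDE_SYSCALLS_KERNEL_H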

        tracing_include = ""
        if fn not in notracing:
            tracing_include = "#include <tracing/tracing_syscall.h>"

        header = syscall_template.format(include_guard=include_guard,
                                         tracing_include=tracing_include,
                                         invocations="\n\n".join(invo_list))

        with open(out_fn, "w") as fp:
            fp.write(header)

scripts: gen_syscalls: fix argument marshalling with 64-bit debug builds

Consider this (simplified) compilation result of a debug build using -O0
for riscv64:

|__pinned_func
|static inline int k_sem_init(struct k_sem * sem,
|                             unsigned int initial_count,
|                             unsigned int limit)
|{
|    80000ad0: 6105       addi   sp,sp,32
|    80000ad2: ec06       sd     ra,24(sp)
|    80000ad4: e42a       sd     a0,8(sp)
|    80000ad6: c22e       sw     a1,4(sp)
|    80000ad8: c032       sw     a2,0(sp)
|        ret = arch_is_user_context();
|    80000ada: b39ff0ef   jal    ra,80000612
|        if (z_syscall_trap()) {
|    80000ade: c911       beqz   a0,80000af2
|            return (int) arch_syscall_invoke3(*(uintptr_t *)&sem,
|                                              *(uintptr_t *)&initial_count,
|                                              *(uintptr_t *)&limit,
|                                              K_SYSCALL_K_SEM_INIT);
|    80000ae0: 6522       ld     a0,8(sp)
|    80000ae2: 00413583   ld     a1,4(sp)
|    80000ae6: 6602       ld     a2,0(sp)
|    80000ae8: 0b700693   li     a3,183
|    [...]

We clearly see the 32-bit values `initial_count` (a1) and `limit` (a2) being
stored in memory with the `sw` (store word) instruction. Then, following the
source code, the address of those values is cast to a pointer to uintptr_t,
and that pointer is dereferenced to get the values back, this time with the
`ld` (load doubleword) instruction. In other words, the assembly does exactly
what the C code says. This is wrong for several reasons:

- The top half of a1 and a2 will contain garbage because of the `ld` used to
  retrieve them. Whether or not the top bits eventually get cleared depends
  on the architecture and compiler.
- Regardless of the above, a1 and a2 would be plain wrong on a big endian
  system.
- The load of a1 will cause a misaligned-access trap, as it is only 4-byte
  aligned while `ld` expects 8-byte alignment.

The code happens to work when compiling with optimizations enabled, because
the compiler simplifies the cast and dereference away and uses the register
content as is. That doesn't make it any more correct, though. The reason for
taking the address of an argument and dereferencing it as a uintptr_t pointer
is most likely to work around the fact that the compiler refuses to cast an
aggregate value to an integer, even when that aggregate is a simple structure
wrapping an integer.

So fix this code by:

- Removing the pointer-dereference round trip and the associated casts, which
  gets rid of all the issues listed above.
- Using a union to perform the type transition, which handles aggregates
  perfectly well. The compiler optimizes it down to the same assembly output
  in the end.

This also makes the compiler happier, as the pragmas that silenced those
warnings are no longer needed; the same should hold for Coverity.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>

    # Likewise emit _mrsh.c files for syscall inclusion
    for fn in mrsh_defs:
        mrsh_fn = os.path.join(args.base_output, fn + "_mrsh.c")

        with open(mrsh_fn, "w") as fp:
            fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n\n")
            fp.write(mrsh_includes[fn] + "\n")
            fp.write("\n")
            fp.write(mrsh_defs[fn] + "\n")
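
        # Putting it together, a generated k_foo_mrsh.c (hypothetical name) would
        # hold the banner comment, the include recorded in mrsh_includes, and the
        # z_mrsh_k_foo() unmarshalling definition from mrsh_defs, roughly:
        #
        #   /* auto-generated by gen_syscalls.py, don't edit */
        #
        #   #include <syscalls/kernel.h>
        #
        #   ... z_mrsh_k_foo() definition ...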


if __name__ == "__main__":
    main()