userspace: get dynamic objs from thread rsrc pools

Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects are now drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, a dynamically
allocated object that loses all of its references is automatically
freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Andrew Boie 2018-04-24 17:01:37 -07:00 committed by Andrew Boie
commit 97bf001f11
6 changed files with 124 additions and 30 deletions


@ -107,12 +107,18 @@ config MAX_THREAD_BYTES
be created in the system.
config DYNAMIC_OBJECTS
bool "Allow kernel objects to be requested on system heap"
bool "Allow kernel objects to be allocated at runtime"
default n
depends on USERSPACE
help
Enabling this option allows for kernel objects to be requested from
the system heap, at a cost in performance and additional memory.
the calling thread's resource pool, at a slight cost in performance
due to the supplemental run-time tables required to validate such
objects.
Objects allocated in this way can be freed with a supervisor-only
API call, or when the number of references to that object drops to
zero.
config SIMPLE_FATAL_ERROR_HANDLER
prompt "Simple system fatal error handler"


@ -26,6 +26,18 @@ a kernel object, checks are performed by system call handler functions
that the kernel object address is valid and that the calling thread
has sufficient permissions to work with it.
Permission on an object also has the semantics of a reference to that object.
This is significant for certain object APIs which do temporary allocations,
or objects which themselves have been allocated from a runtime memory pool.
If an object loses all references, two events may happen:
* If the object has an associated cleanup function, the cleanup function
may be called to release any runtime-allocated buffers the object was using.
* If the object itself was dynamically allocated, the memory for the object
will be freed.
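For example, a minimal sketch of the second case (illustrative only; it uses
the :cpp:func:`k_object_alloc()` API described under Dynamic Objects below,
and assumes the calling thread has a resource pool assigned):

.. code-block:: c

    struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

    if (sem != NULL) {
        k_sem_init(sem, 0, 1);

        /* ... use the semaphore ... */

        /* This thread holds the only reference (permission) to the
         * object. Releasing it drops the reference count to zero and
         * the object's memory is returned to the resource pool.
         */
        k_object_release(sem);
    }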
Object Placement
****************
@ -34,8 +46,8 @@ and can be located anywhere in the binary, or even declared on stacks. However,
to prevent accidental or intentional corruption by user threads, they must
not be located in any memory that user threads have direct access to.
In order for a kernel object to be usable by a user thread via system call
APIs, several conditions must be met on how the kernel object is declared:
In order for a static kernel object to be usable by a user thread via system
call APIs, several conditions must be met on how the kernel object is declared:
* The object must be declared as a top-level global at build time, such that it
appears in the ELF symbol table. It is permitted to declare kernel objects
@ -68,6 +80,30 @@ debugging why some object was unexpectedly not being tracked. This
information will be printed if the script is run with the ``--verbose`` flag,
or if the build system is invoked with verbose output.
Dynamic Objects
***************
Kernel objects may also be allocated at runtime if
:option:`CONFIG_DYNAMIC_OBJECTS` is enabled. In this case, the
:cpp:func:`k_object_alloc()` API may be used to instantiate an object from
the calling thread's resource pool. Such allocations may be freed in two
ways:
* Supervisor threads may call :cpp:func:`k_object_free()` to force a dynamic
object to be released.
* If an object's references drop to zero (which happens when no threads have
permissions on it) the object will be automatically freed. User threads
may drop their own permission on an object with
:cpp:func:`k_object_release()`, and their permissions are automatically
cleared when a thread terminates. Supervisor threads may additionally
revoke references for another thread using
:cpp:func:`k_object_access_revoke()`.
Because permissions are also used for reference counting, it is important for
supervisor threads to acquire permissions on objects they are using, even
though the access control aspects of the permission system are not enforced
for them.
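The following sketch (illustrative only; ``worker_tid`` stands for some other
thread's ID) shows the typical lifecycle: the allocating thread is implicitly
granted permission, an extra reference may be granted to another thread so the
object outlives the allocator's own permission, and a supervisor thread may
force the object to be freed:

.. code-block:: c

    /* Allocate a mutex from the calling thread's resource pool; the
     * caller is implicitly granted permission on it.
     */
    struct k_mutex *mutex = k_object_alloc(K_OBJ_MUTEX);

    if (mutex == NULL) {
        return -ENOMEM;  /* pool exhausted, or no resource pool assigned */
    }
    k_mutex_init(mutex);

    /* Grant an extra reference to another thread so the object is not
     * freed if this thread later drops its own permission.
     */
    k_object_access_grant(mutex, worker_tid);

    /* ... */

    /* Supervisor-only: release the object immediately. */
    k_object_free(mutex);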
Implementation Details
======================
@ -105,6 +141,9 @@ includes:
to denote how large the stack is, and for thread objects to indicate
the thread's index in kernel object permission bitfields.
Dynamic objects allocated at runtime are tracked in a runtime red/black tree
which is used in parallel to the gperf table when validating object pointers.
Supervisor Thread Access Permission
***********************************
@ -161,6 +200,9 @@ API calls from supervisor mode to set permissions on kernel objects that are
not being tracked by the kernel will be no-ops. Doing the same from user mode
will result in a fatal error for the calling thread.
Objects allocated with :cpp:func:`k_object_alloc()` implicitly grant
permission on the allocated object to the calling thread.
Initialization State
********************
@ -241,6 +283,9 @@ APIs
* :c:func:`k_object_access_grant()`
* :c:func:`k_object_access_revoke()`
* :c:func:`k_object_access_all_grant()`
* :c:func:`k_object_alloc()`
* :c:func:`k_object_free()`
* :c:func:`k_object_release()`
* :c:func:`k_thread_access_grant()`
* :c:func:`k_thread_user_mode_enter()`
* :c:macro:`K_THREAD_ACCESS_GRANT()`


@ -187,6 +187,7 @@ struct _k_object_assignment {
#define K_OBJ_FLAG_INITIALIZED BIT(0)
#define K_OBJ_FLAG_PUBLIC BIT(1)
#define K_OBJ_FLAG_ALLOC BIT(2)
/**
* Lookup a kernel object and init its metadata if it exists
@ -290,14 +291,13 @@ __syscall void k_object_release(void *object);
*/
void k_object_access_all_grant(void *object);
#ifdef CONFIG_DYNAMIC_OBJECTS
/**
* Allocate a kernel object of a designated type
*
* This will instantiate at runtime a kernel object of the specified type,
* returning a pointer to it. The object will be returned in an uninitialized
* state, with the calling thread being granted permission on it. The memory
* for the object will be allocated out of the kernel's heap.
* for the object will be allocated out of the calling thread's resource pool.
*
* Currently, allocation of thread stacks is not supported.
*
@ -305,18 +305,29 @@ void k_object_access_all_grant(void *object);
* @return A pointer to the allocated kernel object, or NULL if memory wasn't
* available
*/
void *k_object_alloc(enum k_objects otype);
__syscall void *k_object_alloc(enum k_objects otype);
#ifdef CONFIG_DYNAMIC_OBJECTS
/**
* Free a kernel object previously allocated with k_object_alloc()
*
* This will return memory for a kernel object back to the system heap.
* Care must be exercised that the object will not be used during or after
* when this call is made.
* This will return memory for a kernel object back to the resource pool it was
* allocated from. Care must be exercised that the object will not be used
* during or after this call is made.
*
* @param obj Pointer to the kernel object memory address.
*/
void k_object_free(void *obj);
#else
static inline void *_impl_k_object_alloc(enum k_objects otype)
{
return NULL;
}
static inline void k_object_free(void *obj)
{
ARG_UNUSED(obj);
}
#endif /* CONFIG_DYNAMIC_OBJECTS */
/* Using typedef deliberately here, this is quite intended to be an opaque


@ -50,25 +50,36 @@ struct perm_ctx {
#ifdef CONFIG_DYNAMIC_OBJECTS
struct dyn_obj {
struct _k_object kobj;
sys_dnode_t obj_list;
struct rbnode node; /* must be immediately before data member */
u8_t data[]; /* The object itself */
};
struct visit_ctx {
_wordlist_cb_func_t func;
void *original_context;
};
extern struct _k_object *_k_object_gperf_find(void *obj);
extern void _k_object_gperf_wordlist_foreach(_wordlist_cb_func_t func,
void *context);
static int node_lessthan(struct rbnode *a, struct rbnode *b);
/*
* Red/black tree of allocated kernel objects, for reasonably fast lookups
* based on object pointer values.
*/
static struct rbtree obj_rb_tree = {
.lessthan_fn = node_lessthan
};
/*
* Linked list of allocated kernel objects, for iteration over all allocated
* objects (and potentially deleting them during iteration).
*/
static sys_dlist_t obj_list = SYS_DLIST_STATIC_INIT(&obj_list);
/*
* TODO: Write some hash table code that will replace both obj_rb_tree
* and obj_list.
*/
/* TODO: incorporate auto-gen with Leandro's patch */
static size_t obj_size_get(enum k_objects otype)
{
@ -128,7 +139,7 @@ static struct dyn_obj *dyn_object_find(void *obj)
return ret;
}
void *k_object_alloc(enum k_objects otype)
void *_impl_k_object_alloc(enum k_objects otype)
{
struct dyn_obj *dyn_obj;
int key;
@ -140,7 +151,7 @@ void *k_object_alloc(enum k_objects otype)
otype != K_OBJ__THREAD_STACK_ELEMENT,
"bad object type requested");
dyn_obj = k_malloc(sizeof(*dyn_obj) + obj_size_get(otype));
dyn_obj = z_thread_malloc(sizeof(*dyn_obj) + obj_size_get(otype));
if (!dyn_obj) {
SYS_LOG_WRN("could not allocate kernel object");
return NULL;
@ -148,7 +159,7 @@ void *k_object_alloc(enum k_objects otype)
dyn_obj->kobj.name = (char *)&dyn_obj->data;
dyn_obj->kobj.type = otype;
dyn_obj->kobj.flags = 0;
dyn_obj->kobj.flags = K_OBJ_FLAG_ALLOC;
memset(dyn_obj->kobj.perms, 0, CONFIG_MAX_THREAD_BYTES);
/* The allocating thread implicitly gets permission on kernel objects
@ -158,6 +169,7 @@ void *k_object_alloc(enum k_objects otype)
key = irq_lock();
rb_insert(&obj_rb_tree, &dyn_obj->node);
sys_dlist_append(&obj_list, &dyn_obj->obj_list);
irq_unlock(key);
return dyn_obj->kobj.name;
@ -177,6 +189,7 @@ void k_object_free(void *obj)
dyn_obj = dyn_object_find(obj);
if (dyn_obj) {
rb_remove(&obj_rb_tree, &dyn_obj->node);
sys_dlist_remove(&dyn_obj->obj_list);
}
irq_unlock(key);
@ -203,25 +216,17 @@ struct _k_object *_k_object_find(void *obj)
return ret;
}
static void visit_fn(struct rbnode *node, void *context)
{
struct visit_ctx *vctx = context;
vctx->func(&node_to_dyn_obj(node)->kobj, vctx->original_context);
}
void _k_object_wordlist_foreach(_wordlist_cb_func_t func, void *context)
{
struct visit_ctx vctx;
int key;
struct dyn_obj *obj, *next;
_k_object_gperf_wordlist_foreach(func, context);
vctx.func = func;
vctx.original_context = context;
key = irq_lock();
rb_walk(&obj_rb_tree, visit_fn, &vctx);
SYS_DLIST_FOR_EACH_CONTAINER_SAFE(&obj_list, obj, next, obj_list) {
func(&obj->kobj, context);
}
irq_unlock(key);
}
#endif /* CONFIG_DYNAMIC_OBJECTS */
@ -256,6 +261,16 @@ static void unref_check(struct _k_object *ko)
default:
break;
}
#ifdef CONFIG_DYNAMIC_OBJECTS
if (ko->flags & K_OBJ_FLAG_ALLOC) {
struct dyn_obj *dyn_obj =
CONTAINER_OF(ko, struct dyn_obj, kobj);
rb_remove(&obj_rb_tree, &dyn_obj->node);
sys_dlist_remove(&dyn_obj->obj_list);
k_free(dyn_obj);
}
#endif
}
static void wordlist_cb(struct _k_object *ko, void *ctx_ptr)


@ -58,3 +58,12 @@ _SYSCALL_HANDLER(k_object_release, object)
return 0;
}
_SYSCALL_HANDLER(k_object_alloc, otype)
{
_SYSCALL_VERIFY_MSG(otype > K_OBJ_ANY && otype < K_OBJ_LAST &&
otype != K_OBJ__THREAD_STACK_ELEMENT,
"bad object type %d requested", otype);
return (u32_t)_impl_k_object_alloc(otype);
}


@ -65,6 +65,8 @@ void object_permission_checks(struct k_sem *sem, bool skip_init)
"object should have had sufficient permissions");
}
extern const k_tid_t _main_thread;
void test_generic_object(void)
{
struct k_sem stack_sem;
@ -84,6 +86,11 @@ void test_generic_object(void)
for (int i = 0; i < SEM_ARRAY_SIZE; i++) {
object_permission_checks(&semarray[i], false);
dyn_sem[i] = k_object_alloc(K_OBJ_SEM);
zassert_not_null(dyn_sem[i], "couldn't allocate semaphore\n");
/* Give an extra reference to another thread so the object
* doesn't disappear if we revoke our own
*/
k_object_access_grant(dyn_sem[i], _main_thread);
}
/* dynamic object table well-populated with semaphores at this point */
@ -99,6 +106,7 @@ void test_generic_object(void)
void test_main(void)
{
k_thread_system_pool_assign(k_current_get());
ztest_test_suite(object_validation,
ztest_unit_test(test_generic_object));
ztest_run_test_suite(object_validation);
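/* Illustrative sketch (not part of this patch): instead of the
 * k_thread_system_pool_assign() call above, a thread can be given a
 * private memory pool to allocate kernel objects from. Without an
 * assigned resource pool, k_object_alloc() returns NULL. The pool and
 * function names below are hypothetical; K_MEM_POOL_DEFINE() and
 * k_thread_resource_pool_assign() are assumed to be available.
 */
K_MEM_POOL_DEFINE(obj_pool, 64, 1024, 4, 4);

static void assign_private_pool(void)
{
	struct k_sem *sem;

	k_thread_resource_pool_assign(k_current_get(), &obj_pool);

	sem = k_object_alloc(K_OBJ_SEM);
	/* NULL here means obj_pool could not satisfy the request */
}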