path: root/src/thread/synccall.c
Age  Commit message  [Author]  Lines (-/+)
2023-11-06  synccall: add separate exit_sem to fix thread release logic bug  [Markus Wichmann]  -3/+5
The code intends for the sem_post() on line 97 (now 98) to only unblock target threads waiting on line 29. But after the first thread is released, the next sem_post() might also unblock a thread waiting on line 36. That would cause the thread to return to the execution of user code before all threads are done, leading to user code being executed in a mixed-credentials environment.

What's more, if this happens more than once, the mass release on line 110 (now line 111) will cause multiple threads to execute the callback at the same time, and the callbacks are currently not written to cope with that situation.

Adding another semaphore allows the caller to say explicitly which threads it wants to release.
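The handshake can be pictured with the following hedged sketch (hypothetical names, not musl's actual code): target threads wait on one semaphore to run the callback and on a separate one before returning to user code, so a post aimed at the first wait can never wake a thread blocked at the second.

    #include <semaphore.h>

    /* hypothetical illustration of the two-semaphore handshake */
    static sem_t target_sem; /* posted to let a target thread run the callback */
    static sem_t exit_sem;   /* posted for the final mass release */

    static void init_sems(void)
    {
        sem_init(&target_sem, 0, 0);
        sem_init(&exit_sem, 0, 0);
    }

    static void target_thread_side(void (*callback)(void *), void *ctx)
    {
        sem_wait(&target_sem);  /* wait until the caller releases us */
        callback(ctx);
        sem_wait(&exit_sem);    /* wait until every thread has run the callback */
    }

    static void caller_side(int nthreads, void (*callback)(void *), void *ctx)
    {
        for (int i = 0; i < nthreads; i++)
            sem_post(&target_sem);  /* unblocks only callback waiters */
        /* ... wait for all callbacks to complete ... */
        for (int i = 0; i < nthreads; i++)
            sem_post(&exit_sem);    /* now release everyone back to user code */
    }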
2022-08-20  use alt signal stack when present for implementation-internal signals  [Rich Felker]  -1/+1
a request for this behavior has been open for a long time. the motivation is that application code, particularly under some language runtimes designed around very-low-footprint coroutine type constructs, may be operating with extremely small stack sizes unsuitable for receiving signals, using a separate signal stack for any signals it might handle.

progress on this was blocked at one point trying to determine whether the implementation is actually entitled to clobber the alt stack, but the phrasing "available to the implementation" in the POSIX spec for sigaltstack seems to make it clear that the application cannot rely on the contents of this memory to be preserved in the absence of signal delivery (on the abstract machine, excluding implementation-internal signals) and that we can therefore use it for delivery of signals that "don't exist" on the abstract machine.

no change is made for SIGTIMER since it is always blocked when used, and accepted via sigwaitinfo rather than execution of the signal handler.
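As a hedged sketch of the application-side setup this change accommodates (not musl code), a thread with a tiny stack registers an alternate signal stack; handlers installed with SA_ONSTACK, including implementation-internal ones, are then delivered on it rather than on the undersized thread stack.

    #include <signal.h>
    #include <stdlib.h>

    /* register an alternate signal stack for the calling thread */
    static void setup_alt_stack(void)
    {
        stack_t ss;
        ss.ss_sp = malloc(SIGSTKSZ);
        ss.ss_size = SIGSTKSZ;
        ss.ss_flags = 0;
        if (ss.ss_sp) sigaltstack(&ss, 0);
    }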
2020-09-17  avoid set*id/setrlimit misbehavior and hang in vforked/cloned child  [Rich Felker]  -1/+2
taking the deprecated/dropped vfork spec strictly, doing pretty much anything but execve in the child is wrong and undefined. however, these are commonly needed operations to set up the child state before exec, and historical implementations tolerated them.

for single-threaded parents, these operations already worked as expected in the vforked child. however, due to the need for __synccall to synchronize id/resource limit changes among all threads, calling these functions in the vforked child of a multithreaded parent caused a misdirected broadcast signaling of all threads in the parent. these signals could kill the parent entirely if the synccall signal handler had never been installed in the parent, or could be ignored if it had, or could signal/kill one or more utterly wrong processes if the parent already terminated (due to vfork semantics, only possible via fatal signal) and the parent tids were recycled. in any case, the expected number of semaphore posts would never happen, so the child would permanently hang (with all signals blocked) waiting for them.

to mitigate this, and also make the normal usage case work as intended, treat the condition where the caller's actual tid does not match the tid in its thread structure as single-threaded, and bypass the entire synccall broadcast operation.
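The bypass condition can be sketched as follows (hypothetical "cached_tid" parameter standing in for the tid stored in the thread structure): in a vfork()ed or CLONE_VM-cloned child, that cached tid still belongs to the parent, so it no longer matches what the kernel reports for the current task.

    #include <unistd.h>
    #include <sys/syscall.h>

    /* return nonzero only when broadcasting to other threads is safe */
    static int may_broadcast(int cached_tid)
    {
        long actual_tid = syscall(SYS_gettid);
        return actual_tid == cached_tid;  /* mismatch: act single-threaded */
    }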
2019-02-16  rewrite __synccall in terms of global thread list  [Rich Felker]  -119/+59
the __synccall mechanism provides stop-the-world synchronous execution of a callback in all threads of the process. it is used to implement multi-threaded setuid/setgid operations, since Linux lacks them at the kernel level, and for some other less-critical purposes.

this change eliminates dependency on /proc/self/task to determine the set of live threads, which in addition to being an unwanted dependency and a potential point of resource-exhaustion failure, turned out to be inaccurate. test cases provided by Alexey Izbyshev showed that it could fail to reflect newly created threads. due to how the presignaling phase worked, this usually yielded a deadlock if hit, but in the worst case it could also result in threads being silently missed (allowed to continue running without executing the callback).
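A rough sketch of the idea, under assumed names ("struct tnode", "thread_list") rather than the real data structure: with a global list of live threads maintained by thread creation and exit, the caller can signal exactly the threads that exist when the world is stopped, instead of re-reading /proc/self/task and hoping the directory listing is complete.

    #include <sys/syscall.h>
    #include <unistd.h>

    struct tnode { int tid; struct tnode *next; };
    static struct tnode *thread_list;   /* maintained under the thread-list lock */

    /* deliver the broadcast signal to every live thread except ourselves */
    static void signal_other_threads(int sig, int self_tid)
    {
        for (struct tnode *t = thread_list; t; t = t->next)
            if (t->tid != self_tid)
                syscall(SYS_tkill, t->tid, sig);
    }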
2018-09-12  split internal lock API out of libc.h, creating lock.h  [Rich Felker]  -0/+1
this further reduces the number of source files which need to include libc.h and thereby be potentially exposed to libc global state and internals. this will also facilitate further improvements like adding an inline fast-path, if we want to do so later.
2018-01-09  revise the definition of multiple basic locks in the code  [Jens Gustedt]  -1/+1
In all cases this is just a change from two volatile int to one.
2017-01-19  fix spurious EINTR errors from multithreaded set*id, etc.  [Rich Felker]  -1/+1
commit 78a8ef47c4d92b7680c52a85f80a81e29da86bb9 inadvertently removed the SA_RESTART flag from the sigaction for the internal signal handler used by __synccall for broadcasting. as a result, programs which did not use interrupting signals but which used set*id() in a multithreaded context could wrongly observe EINTR errors they're not prepared to handle.
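A minimal sketch of the fix (with a placeholder handler name): the handler used to interrupt the other threads is installed with SA_RESTART, so blocking syscalls in those threads are transparently restarted instead of failing with EINTR when the broadcast signal arrives.

    #include <signal.h>

    static void broadcast_handler(int sig) { (void)sig; /* ... */ }

    /* install the internal broadcast handler with SA_RESTART */
    static void install_broadcast_handler(int signo)
    {
        struct sigaction sa = {0};
        sa.sa_handler = broadcast_handler;
        sa.sa_flags = SA_RESTART;
        sigfillset(&sa.sa_mask);   /* keep other signals out while it runs */
        sigaction(signo, &sa, 0);
    }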
2015-03-03  make all objects used with atomic operations volatile  [Rich Felker]  -2/+2
the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage. the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
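An illustrative sketch of the construct discussed above; a_cas() is written here with a GCC builtin as a stand-in for musl's arch-specific atomic. The point is that "tmp" is loaded exactly once from a volatile object, so the compiler cannot legally fuse the pattern into the a_cas(p,*p,f(*p)) form with two separate loads.

    /* compare-and-swap returning the old value (stand-in for musl's a_cas) */
    static int a_cas(volatile int *p, int t, int s)
    {
        return __sync_val_compare_and_swap(p, t, s);
    }

    /* atomically replace *p with f(*p) */
    static void atomic_update(volatile int *p, int (*f)(int))
    {
        int tmp;
        do tmp = *p;                           /* single plain load */
        while (a_cas(p, tmp, f(tmp)) != tmp);  /* retry if another thread raced us */
    }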
2015-01-15  overhaul __synccall and fix AS-safety and other issues in set*id  [Rich Felker]  -45/+135
multi-threaded set*id and setrlimit use the internal __synccall function to work around the kernel's wrongful treatment of these process properties as thread-local. the old implementation of __synccall failed to be AS-safe, despite POSIX requiring setuid and setgid to be AS-safe, and was not rigorous in assuring that all threads were caught. in a worst case, threads late in the process of exiting could retain permissions after setuid reported success, in which case attacks to regain dropped permissions may have been possible under the right conditions.

the new implementation of __synccall depends on the presence of /proc/self/task and will fail if it can't be opened, but is able to determine that it has caught all threads, and does not use any locks except its own. it thereby achieves AS-safety simply by blocking signals to preclude re-entry in the same thread.

with this commit, all known conformance and safety issues in set*id functions should be fixed.
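A minimal sketch of that AS-safety approach, using POSIX pthread_sigmask rather than musl's internal helpers: all signals are blocked in the calling thread before its own lock is taken, so no signal handler in this thread can re-enter the mechanism, and the previous mask is restored when the operation completes.

    #include <signal.h>

    /* run op(ctx) with every signal blocked in the calling thread */
    static void with_signals_blocked(void (*op)(void *), void *ctx)
    {
        sigset_t all, old;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, &old);
        op(ctx);                                /* cannot be re-entered via a handler */
        pthread_sigmask(SIG_SETMASK, &old, 0);  /* restore the caller's mask */
    }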
2014-07-05  eliminate use of cached pid from thread structure  [Rich Felker]  -5/+3
the main motivation for this change is to remove the assumption that the tid of the main thread is also the pid of the process. (the value returned by the set_tid_address syscall was used to fill both fields despite it semantically being the tid.) this is historically and presently true on linux and unlikely to change, but it conceivably could be false on other systems that otherwise reproduce the linux syscall api/abi.

only a few parts of the code were actually still using the cached pid. in a couple places (aio and synccall) it was a minor optimization to avoid a syscall. caching could be reintroduced, but lazily as part of the public getpid function rather than at program startup, if it's deemed important for performance later.

in other places (cancellation and pthread_kill) the pid was completely unnecessary; the tkill syscall can be used instead of tgkill. this is actually a rather subtle issue, since tgkill is supposedly a solution to race conditions that can affect use of tkill. however, as documented in the commit message for commit 7779dbd2663269b465951189b4f43e70839bc073, tgkill does not actually solve this race; it just limits it to happening within one process rather than between processes. we use a lock that avoids the race in pthread_kill, and the use in the cancellation signal handler is self-targeted and thus not subject to tid reuse races, so both are safe regardless of which syscall (tgkill or tkill) is used.
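A small sketch of the substitution (illustrative helper, not musl's code): tkill targets a thread by tid alone, so the cached process pid is not needed; tgkill additionally checks the tgid, but as noted it only narrows the tid-reuse race to within one process rather than eliminating it.

    #include <sys/syscall.h>
    #include <unistd.h>

    /* send sig to the thread with the given kernel tid */
    static int signal_thread(int tid, int sig)
    {
        return syscall(SYS_tkill, tid, sig);
    }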
2013-12-12  include cleanups: remove unused headers and add feature test macros  [Szabolcs Nagy]  -1/+0
2013-09-02  fix mips-specific bug in synccall (too little space for signal mask)  [Rich Felker]  -5/+3
switch to the new __block_all_sigs/__restore_sigs internal API to clean up the code too.
2013-09-02  in synccall, ignore the signal before any threads' signal handlers return  [Rich Felker]  -4/+4
this protects against deadlock from spurious signals (e.g. sent by another process) arriving after the controlling thread releases the other threads from the sync operation.
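A small sketch of that ordering, with "sync_sig" as a placeholder for the internal signal number: the controlling thread resets the signal's disposition to SIG_IGN before releasing the other threads, so a late or spurious instance of the signal is simply discarded instead of invoking a handler that would block forever waiting for a sync operation that is already over.

    #include <signal.h>

    /* finish a sync operation: ignore the signal, then release the threads */
    static void finish_sync_op(int sync_sig, void (*release_threads)(void))
    {
        signal(sync_sig, SIG_IGN);  /* discard any stragglers from now on */
        release_threads();          /* e.g. post the per-thread semaphores */
    }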
2013-09-02  fix invalid pointer in synccall (multithread setuid, etc.)  [Rich Felker]  -0/+1
the head pointer was not being reset between calls to synccall, so any use of this interface more than once would build the linked list incorrectly, keeping the (now invalid) list nodes from the previous call.
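A trivial sketch of the fix, with illustrative names ("struct node", "head"): because the list head is a static object, it has to be reset at the start of every call; otherwise the second call links its new nodes onto stale nodes left over from the first.

    struct node { struct node *next; };
    static struct node *head;   /* persists across calls, so must be reset */

    static void begin_synccall_like_op(void)
    {
        head = 0;   /* reset before building this call's chain */
        /* ... one node per signaled thread is then pushed onto head ... */
    }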
2013-04-26  synccall signal handler need not handle dead threads anymore  [Rich Felker]  -9/+0
they have already blocked signals before decrementing the thread count, so the code being removed is unreachable in the case where the thread is no longer counted.
2013-03-26  remove __SYSCALL_SSLEN arch macro in favor of using public _NSIG  [Rich Felker]  -2/+2
the issue at hand is that many syscalls require as an argument the kernel-ABI size of sigset_t, intended to allow the kernel to switch to a larger sigset_t in the future. previously, each arch was defining this size in syscall_arch.h, which was redundant with the definition of _NSIG in bits/signal.h. as it's used in some not-quite-portable application code as well, _NSIG is much more likely to be recognized and understood immediately by someone reading the code, and it's also shorter and less cluttered. note that _NSIG is actually 65/129, not 64/128, but the division takes care of throwing away the off-by-one part.
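An illustrative sketch of the convention (not musl internals; the _GNU_SOURCE define is an assumption for libcs that gate _NSIG behind it): the kernel wants the size of its own sigset_t as the last rt_sigprocmask argument, and with _NSIG of 65 (129 on mips), integer division by 8 yields the 8-byte (16-byte) kernel sigset size, discarding the off-by-one bit.

    #define _GNU_SOURCE   /* assumed: exposes _NSIG on some libcs */
    #include <signal.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* block all signals, passing the kernel sigset size derived from _NSIG */
    static void block_all_signals(sigset_t *old)
    {
        sigset_t all;
        sigfillset(&all);
        syscall(SYS_rt_sigprocmask, SIG_BLOCK, &all, old, _NSIG/8);
    }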
2012-11-08  clean up sloppy nested inclusion from pthread_impl.h  [Rich Felker]  -0/+1
this mirrors the stdio_impl.h cleanup. one header which is not strictly needed, errno.h, is left in pthread_impl.h, because since pthread functions return their error codes rather than using errno, nearly every single pthread function needs the errno constants. in a few places, rather than bringing in string.h to use memset, the memset was replaced by direct assignment. this seems to generate much better code anyway, and makes many functions which were previously non-leaf functions into leaf functions (possibly eliminating a great deal of bloat on some platforms where non-leaf functions require ugly prologue and/or epilogue).
2012-10-05  support for TLS in dynamic-loaded (dlopen) modules  [Rich Felker]  -13/+2
unlike other implementations, this one reserves memory for new TLS in all pre-existing threads at dlopen-time, and dlopen will fail with no resources consumed and no new libraries loaded if memory is not available. memory is not immediately distributed to running threads; that would be too complex and too costly. instead, assurances are made that threads needing the new TLS can obtain it in an async-signal-safe way from a buffer belonging to the dynamic linker/new module (via atomic fetch-and-add based allocator).

I've re-appropriated the lock that was previously used for __synccall (synchronizing set*id() syscalls between threads) as a general pthread_create lock. it's a "backwards" rwlock where the "read" operation is safe atomic modification of the live thread count, which multiple threads can perform at the same time, and the "write" operation is making sure the count does not increase during an operation that depends on it remaining bounded (__synccall or dlopen). in static-linked programs that don't use __synccall, this lock is a no-op and has no cost.
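A loose sketch of that "backwards" rwlock idea, using a standard rwlock and a hypothetical counter rather than musl's internal lock: thread creation/exit take the "read" side, since many of them may adjust the count concurrently, while __synccall/dlopen-style operations take the "write" side to keep the count from changing underneath them.

    #include <pthread.h>

    static pthread_rwlock_t count_lock = PTHREAD_RWLOCK_INITIALIZER;
    static volatile int live_threads = 1;

    /* "read" side: many threads may bump the count at once */
    static void note_thread_created(void)
    {
        pthread_rwlock_rdlock(&count_lock);
        __sync_fetch_and_add(&live_threads, 1);
        pthread_rwlock_unlock(&count_lock);
    }

    /* "write" side: run op() while the count cannot grow */
    static void with_thread_count_frozen(void (*op)(void))
    {
        pthread_rwlock_wrlock(&count_lock);
        op();
        pthread_rwlock_unlock(&count_lock);
    }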
2012-08-09  fix (hopefully) all hard-coded 8's for kernel sigset_t size  [Rich Felker]  -2/+4
some minor changes to how hard-coded sets for thread-related purposes are handled were also needed, since the old object sizes were not necessarily sufficient. things have gotten a bit ugly in this area, and i think a cleanup is in order at some point, but for now the goal is just to get the code working on all supported archs including mips, which was badly broken by linux rejecting syscalls with the wrong sigset_t size.
2012-05-22  remove everything related to forkall  [Rich Felker]  -8/+0
i made a best attempt, but the intended semantics of this function are fundamentally contradictory. there is no consistent way to handle ownership of locks when forking a multi-threaded process. the code could have worked by accident for programs that only used normal mutexes and nothing else (since they don't actually store or care about their owner), but that's about it. broken-by-design interfaces that aren't even in glibc (only solaris) don't belong in musl.
2011-08-12  pthread and synccall cleanup, new __synccall_wait op  [Rich Felker]  -2/+10
fix up clone signature to match the actual behavior. the new __synccall_wait function allows a __synccall callback to wait for other threads to continue without returning, so that it can resume action after the caller finishes. this interface could be made significantly more general/powerful with minimal effort, but i'll wait to do that until it's actually useful for something.
2011-07-30  fix bug in synccall with no threads: lock was taken but never released  [Rich Felker]  -4/+4
2011-07-29  new attempt at making set*id() safe and robust  [Rich Felker]  -0/+109
changing credentials in a multi-threaded program is extremely difficult on linux because it requires synchronizing the change between all threads, which have their own thread-local credentials on the kernel side. this is further complicated by the fact that changing the real uid can fail due to exceeding RLIMIT_NPROC, making it possible that the syscall will succeed in some threads but fail in others.

the old __rsyscall approach being replaced was robust in that it would report failure if any one thread failed, but in this case, the program would be left in an inconsistent state where individual threads might have different uid. (this was not as bad as glibc, which would sometimes even fail to report the failure entirely!)

the new approach being committed refuses to change real user id when it cannot temporarily set the rlimit to infinity. this is completely POSIX conformant since POSIX does not require an implementation to allow real-user-id changes for non-privileged processes whatsoever. still, setting the real uid can fail due to memory allocation in the kernel, but this can only happen if there is not already a cached object for the target user. thus, we forcibly serialize the syscall attempts, and fail the entire operation on the first failure. this *should* lead to an all-or-nothing success/failure result, but it's still fragile and highly dependent on kernel developers not breaking things worse than they're already broken.

ideally linux will eventually add a CLONE_USERCRED flag that would give POSIX conformant credential changes without any hacks from userspace, and all of this code would become redundant and could be removed ~10 years down the line when everyone has abandoned the old broken kernels. i'm not holding my breath...
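A heavily simplified sketch of that policy (not the real set*id code; the helper name is illustrative): RLIMIT_NPROC is temporarily raised to infinity so a per-thread real-uid change cannot fail partway through on the limit check, and if the limit cannot be raised, the whole change is refused up front rather than risking mixed credentials.

    #include <sys/resource.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* attempt one thread's uid change with the NPROC limit lifted */
    static int change_ruid_all_or_nothing(uid_t uid)
    {
        struct rlimit old, inf = { RLIM_INFINITY, RLIM_INFINITY };
        if (getrlimit(RLIMIT_NPROC, &old)) return -1;
        if (setrlimit(RLIMIT_NPROC, &inf)) return -1;  /* refuse the change */
        int ret = syscall(SYS_setuid, uid);  /* one serialized attempt */
        setrlimit(RLIMIT_NPROC, &old);       /* restore the original limit */
        return ret;
    }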