path: root/src/process/fork.c
2019-07-01  fix deadlock in synccall after threaded fork  (Samuel Holland, -0/+1)
synccall may be called by AS-safe functions such as setuid/setgid after fork. although fork() resets libc.threads_minus_1, causing synccall to take the single-threaded path, synccall still takes the thread list lock. This lock may be held by another thread if for example fork() races with pthread_create(). After fork(), the value of the lock is meaningless, so clear it.

maintainer's note: commit 8f11e6127fe93093f81a52b15bb1537edc3fc8af and e4235d70672d9751d7718ddc2b52d0b426430768 introduced this regression. the state protected by this lock is the linked list, which is entirely replaced in the child path of fork (next=prev=self), so resetting it is semantically sound.
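a minimal sketch of the child-side reset described above, using hypothetical stand-ins (a plain volatile int for the thread-list lock, a trimmed-down thread descriptor) rather than musl's real internals:

    #include <unistd.h>

    /* hypothetical stand-ins for libc-internal state */
    static volatile int thread_list_lock;            /* may be held by another thread at fork time */
    static struct td { struct td *next, *prev; } self_td;

    pid_t fork_child_reset_sketch(void)
    {
        pid_t ret = fork();
        if (!ret) {
            /* child: the holder of the lock does not exist here, so the old
             * value is meaningless; reset it to the unlocked state */
            thread_list_lock = 0;
            /* the thread list collapses to just the calling thread */
            self_td.next = self_td.prev = &self_td;
        }
        return ret;
    }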
2019-02-15  track all live threads in an AS-safe, fully-consistent linked list  (Rich Felker, -0/+1)
the hard problem here is unlinking threads from a list when they exit without creating a window of inconsistency where the kernel task for a thread still exists and is still executing instructions in userspace, but is not reflected in the list. the magic solution here is getting rid of per-thread exit futex addresses (set_tid_address), and instead using the exit futex to unlock the global thread list.

since pthread_join can no longer see the thread enter a detach_state of EXITED (which depended on the exit futex address pointing to the detach_state), it must now observe the unlocking of the thread list lock before it can unmap the joined thread and return. it doesn't actually have to take the lock. for this, a __tl_sync primitive is offered, with a signature that will allow it to be enhanced for quick return even under contention on the lock, if needed. for now, the exiting thread always performs a futex wake on its detach_state. a future change could optimize this out except when there is already a joiner waiting.

initial/dynamic variants of detached state no longer need to be tracked separately, since the futex address is always set to the global list lock, not a thread-local address that could become invalid on detached thread exit. all detached threads, however, must perform a second sigprocmask syscall to block implementation-internal signals, since locking the thread list with them already blocked is not permissible.

the arch-independent C version of __unmapself no longer needs to take a lock or set up its own futex address to release the lock, since it must necessarily be called with the thread list lock already held, guaranteeing exclusive access to the temporary stack.

changes to libc.threads_minus_1 no longer need to be atomic, since they are guarded by the thread list lock. it is largely vestigial at this point, and can be replaced with a cheaper boolean indicating whether the process is multithreaded at some point in the future.
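a rough sketch of the __tl_sync idea under stated assumptions (Linux futex, a global lock word that the kernel clears and wakes when the exiting thread's task dies); this is illustrative, not musl's implementation:

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static volatile int thread_list_lock;   /* hypothetical global list lock word */

    /* wait until the thread-list lock has been observed unlocked; the exiting
     * thread's exit futex (set_tid_address) points at this word, so the kernel
     * clears it and issues the futex wake only once the kernel task is gone */
    static void tl_sync_sketch(void)
    {
        int val;
        while ((val = thread_list_lock))
            syscall(SYS_futex, &thread_list_lock, FUTEX_WAIT, val, 0);
    }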
2017-11-10  prevent fork's errno from being clobbered by atfork handlers  (Bobby Bingham, -3/+3)
If the syscall fails, errno must be set correctly for the caller. There's no guarantee that the handlers registered with pthread_atfork won't clobber errno, so we need to ensure it gets set after they are called.
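the ordering matters: the syscall's error must be captured before the parent/child handlers run, and the caller-visible errno set only afterwards. a hedged sketch with hypothetical handler-dispatch stubs:

    #include <errno.h>
    #include <unistd.h>

    /* hypothetical atfork handler dispatch stubs */
    static void run_prepare_handlers(void) {}
    static void run_parent_handlers(void) {}
    static void run_child_handlers(void) {}

    pid_t fork_errno_sketch(void)
    {
        run_prepare_handlers();
        pid_t ret = fork();
        int saved_errno = errno;        /* capture the syscall's error, if any */
        if (!ret) run_child_handlers();
        else run_parent_handlers();     /* these may clobber errno */
        errno = saved_errno;            /* report the fork failure, not a handler's */
        return ret;
    }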
2015-04-13  remove remnants of support for running in no-thread-pointer mode  (Rich Felker, -1/+1)
since 1.1.0, musl has nominally required a thread pointer to be setup. most of the remaining code that was checking for its availability was doing so for the sake of being usable by the dynamic linker. as of commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer necessary; the thread pointer is now valid before any libc code (outside of dynamic linker bootstrap functions) runs. this commit essentially concludes "phase 3" of the "transition path for removing lazy init of thread pointer" project that began during the 1.1.0 release cycle.
2015-04-10  optimize out setting up robust list with kernel when not needed  (Rich Felker, -1/+2)
as a result of commit 12e1e324683a1d381b7f15dd36c99b37dd44d940, kernel processing of the robust list is only needed for process-shared mutexes. previously the first attempt to lock any owner-tracked mutex resulted in robust list initialization and a set_robust_list syscall. this is no longer necessary, and since the kernel's record of the robust list must now be cleared at thread exit time for detached threads, optimizing it out is more worthwhile than before too.
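a sketch of the deferred registration described here, with a stand-in robust-list header and a per-thread flag (names hypothetical); the real layout and trigger points live in musl's pthread internals:

    #include <sys/syscall.h>
    #include <unistd.h>

    /* minimal robust_list head in the layout the kernel expects;
     * futex_offset is left 0 in this sketch */
    struct robust_head { void *list; long futex_offset; void *pending; };

    static _Thread_local struct robust_head head;
    static _Thread_local int registered;

    /* called on the first lock of a process-shared owner-tracked mutex;
     * private owner-tracked mutexes are cleaned up in userspace at thread
     * exit, so they no longer force this syscall */
    static void need_robust_list_sketch(void)
    {
        if (registered) return;
        head.list = &head.list;   /* empty circular list */
        syscall(SYS_set_robust_list, &head, sizeof head);
        registered = 1;
    }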
2014-07-05  eliminate use of cached pid from thread structure  (Rich Felker, -1/+1)
the main motivation for this change is to remove the assumption that the tid of the main thread is also the pid of the process. (the value returned by the set_tid_address syscall was used to fill both fields despite it semantically being the tid.) this is historically and presently true on linux and unlikely to change, but it conceivably could be false on other systems that otherwise reproduce the linux syscall api/abi.

only a few parts of the code were actually still using the cached pid. in a couple places (aio and synccall) it was a minor optimization to avoid a syscall. caching could be reintroduced, but lazily as part of the public getpid function rather than at program startup, if it's deemed important for performance later.

in other places (cancellation and pthread_kill) the pid was completely unnecessary; the tkill syscall can be used instead of tgkill. this is actually a rather subtle issue, since tgkill is supposedly a solution to race conditions that can affect use of tkill. however, as documented in the commit message for commit 7779dbd2663269b465951189b4f43e70839bc073, tgkill does not actually solve this race; it just limits it to happening within one process rather than between processes. we use a lock that avoids the race in pthread_kill, and the use in the cancellation signal handler is self-targeted and thus not subject to tid reuse races, so both are safe regardless of which syscall (tgkill or tkill) is used.
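for the pthread_kill case, a hedged sketch of why tkill suffices once a per-thread lock pins the tid (field and lock names are illustrative, not musl's):

    #include <errno.h>
    #include <pthread.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct td_stub { int tid; pthread_mutex_t killlock; };

    /* with the target's killlock held, the thread cannot exit and its tid
     * cannot be reused, so plain tkill carries no extra risk over tgkill */
    static int pthread_kill_sketch(struct td_stub *t, int sig)
    {
        pthread_mutex_lock(&t->killlock);
        int err = syscall(SYS_tkill, t->tid, sig) ? errno : 0;
        pthread_mutex_unlock(&t->killlock);
        return err;
    }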
2014-05-29  support linux kernel apis (new archs) with old syscalls removed  (Rich Felker, -0/+5)
such archs are expected to omit definitions of the SYS_* macros for syscalls their kernels lack from arch/$ARCH/bits/syscall.h. the preprocessor is then able to select an appropriate implementation for affected functions. two basic strategies are used on a case-by-case basis:

where the old syscalls correspond to deprecated library-level functions, the deprecated functions have been converted to wrappers for the modern function, and the modern function has fallback code (omitted at the preprocessor level on new archs) to make use of the old syscalls if the new syscall fails with ENOSYS. this also improves functionality on older kernels and eliminates the incentive to program with deprecated library-level functions for the sake of compatibility with older kernels.

in other situations where the old syscalls correspond to library-level functions which are not deprecated but merely lack some new features, such as the *at functions, the old syscalls are still used on archs which support them. this may change at some point in the future if or when fallback code is added to the new functions to make them usable (possibly with reduced functionality) on old kernels.
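a sketch of the first strategy, using pipe2 as an example: try the modern syscall, and only on archs that still define the legacy number fall back when the kernel reports ENOSYS (real musl code differs in detail):

    #include <errno.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int pipe2_sketch(int fd[2], int flags)
    {
        int r = syscall(SYS_pipe2, fd, flags);
    #ifdef SYS_pipe
        /* old kernel: the legacy syscall can only cover the flag-less case */
        if (r == -1 && errno == ENOSYS && !flags)
            r = syscall(SYS_pipe, fd);
    #endif
        return r;
    }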
2014-03-24  always initialize thread pointer at program start  (Rich Felker, -3/+2)
this is the first step in an overhaul aimed at greatly simplifying and optimizing everything dealing with thread-local state.

previously, the thread pointer was initialized lazily on first access, or at program startup if stack protector was in use, or at certain random places where inconsistent state could be reached if it were not initialized early. while believed to be fully correct, the logic was fragile and non-obvious.

in the first phase of the thread pointer overhaul, support is retained (and in some cases improved) for systems/situations where loading the thread pointer fails, e.g. old kernels.

some notes on specific changes:

- the confusing use of libc.main_thread as an indicator that the thread pointer is initialized is eliminated in favor of an explicit has_thread_pointer predicate.

- sigaction no longer needs to ensure that the thread pointer is initialized before installing a signal handler (this was needed to prevent a situation where the signal handler caused the thread pointer to be initialized and the subsequent sigreturn cleared it again) but it still needs to ensure that implementation-internal thread-related signals are not blocked.

- pthread tsd initialization for the main thread is deferred in a new manner to minimize bloat in the static-linked __init_tp code.

- pthread_setcancelstate no longer needs special handling for the situation before the thread pointer is initialized. it simply fails on systems that cannot support a thread pointer, which are non-conforming anyway.

- pthread_cleanup_push/pop now check for missing thread pointer and nop themselves out in this case, so stdio no longer needs to avoid the cancellable path when the thread pointer is not available.

a number of cases remain where certain interfaces may crash if the system does not support a thread pointer. at this point, these should be limited to pthread interfaces, and the number of such cases should be fewer than before.
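a tiny sketch of the guard style described in the notes above; has_thread_pointer is the predicate named in the message, everything else here is a hypothetical stand-in (and the predicate itself was removed again by the 2015 commits higher up):

    struct libc_stub { int has_thread_pointer; };
    static struct libc_stub libc_stub;

    /* cleanup push degrades to a no-op when no thread pointer exists, so
     * callers like stdio can use the cancellable path unconditionally */
    static void cleanup_push_sketch(void (*f)(void *), void *arg)
    {
        if (!libc_stub.has_thread_pointer) return;
        /* otherwise (f, arg) would be recorded on the thread's cleanup chain */
        (void)f; (void)arg;
    }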
2013-08-08  block signals during fork  (Rich Felker, -0/+3)
there are several reasons for this. some of them are related to race conditions that arise since fork is required to be async-signal-safe: if fork or pthread_create is called from a signal handler after the fork syscall has returned but before the subsequent userspace code has finished, inconsistent state could result. also, there seem to be kernel and/or strace bugs related to arrival of signals during fork, at least on some versions, and simply blocking signals eliminates the possibility of such bugs.
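a minimal sketch of the blocking described above (the real fork also does its libc bookkeeping between the two sigprocmask calls, in both parent and child):

    #include <signal.h>
    #include <unistd.h>

    pid_t fork_blocked_sketch(void)
    {
        sigset_t all, old;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &old);   /* no handler can run past this point */
        pid_t ret = fork();
        /* ... parent/child bookkeeping would go here ... */
        sigprocmask(SIG_SETMASK, &old, 0);    /* restored in whichever process we are */
        return ret;
    }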
2012-11-08  clean up sloppy nested inclusion from pthread_impl.h  (Rich Felker, -0/+1)
this mirrors the stdio_impl.h cleanup. one header which is not strictly needed, errno.h, is left in pthread_impl.h because, since pthread functions return their error codes rather than using errno, nearly every single pthread function needs the errno constants.

in a few places, rather than bringing in string.h to use memset, the memset was replaced by direct assignment. this seems to generate much better code anyway, and makes many functions which were previously non-leaf functions into leaf functions (possibly eliminating a great deal of bloat on some platforms where non-leaf functions require ugly prologue and/or epilogue).
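an illustration of the memset replacement mentioned here, on a generic struct (whether the compiler emits inline stores depends on size and target, but for small structs it typically does):

    #include <string.h>

    struct waiter_stub { void *next, *prev; int state; };

    /* before: the library call keeps this from being a leaf function */
    void init_with_memset(struct waiter_stub *w)
    {
        memset(w, 0, sizeof *w);
    }

    /* after: direct assignment of a zero value, normally inlined as stores */
    void init_with_assignment(struct waiter_stub *w)
    {
        *w = (struct waiter_stub){0};
    }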
2011-08-06  use weak aliases rather than function pointers to simplify some code  (Rich Felker, -2/+8)
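the pattern named in this subject line, sketched with GCC/Clang attribute syntax (musl wraps this in a weak_alias macro; the symbol name here is illustrative): fork.c can call a handler hook that is a do-nothing weak symbol, overridden by a strong definition only when the atfork translation unit is linked in.

    /* no-op default */
    static void dummy(int who) { (void)who; }

    /* resolves to dummy unless a strong fork_handler_sketch is defined elsewhere */
    void fork_handler_sketch(int who) __attribute__((__weak__, __alias__("dummy")));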
2011-07-16  ensure in fork that child gets its own new robust mutex list  (Rich Felker, -0/+1)
2011-04-20  fix minor bugs due to incorrect threaded-predicate semantics  (Rich Felker, -1/+2)
some functions that should have been testing whether pthread_self() had been called and initialized the thread pointer were instead testing whether pthread_create() had been called and actually made the program "threaded". while it's unlikely any mismatch would occur in real-world programs, this could have introduced subtle bugs.

now, we store the address of the main thread's thread descriptor in the libc structure and use its presence as a flag that the thread register is initialized. note that after fork, the calling thread (not necessarily the original main thread) is the new main thread.
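a sketch of the distinction being drawn, with hypothetical field names: one flag answers "has the thread register been set up?", the other "has a second thread ever been created?", and conflating them was the bug:

    struct td_dummy;                     /* opaque thread descriptor */

    static struct {
        struct td_dummy *main_thread;    /* set once pthread_self() has initialized the register */
        int threaded;                    /* set once pthread_create() has succeeded */
    } libc_flags;

    static int thread_register_ready(void) { return libc_flags.main_thread != 0; }
    static int process_is_threaded(void)   { return libc_flags.threaded; }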
2011-04-17  clean up handling of thread/nothread mode, locking  (Rich Felker, -1/+1)
2011-04-12  speed up threaded fork  (Rich Felker, -2/+1)
after fork, we have a new process and the pid is equal to the tid of the new main thread. there is no need to make two separate syscalls to obtain the same number.
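the child-side assignment this describes, as a sketch (descriptor fields are stand-ins; at the time both a tid and a cached pid lived in the thread structure):

    #include <sys/syscall.h>
    #include <unistd.h>

    struct td_ids { int tid, pid; };

    /* in the forked child there is exactly one thread, so a single getpid
     * syscall yields both values */
    static void child_ids_sketch(struct td_ids *self)
    {
        self->pid = self->tid = syscall(SYS_getpid);
    }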
2011-03-20  global cleanup to use the new syscall interface  (Rich Felker, -3/+3)
2011-03-09  make fork properly initialize the main thread in the child process  (Rich Felker, -0/+7)
2011-02-18  add pthread_atfork interface  (Rich Felker, -3/+6)
note that this presently does not handle consistency of the libc's own global state during forking. as per POSIX 2008, if the parent process was threaded, the child process may only call async-signal-safe functions until one of the exec-family functions is called, so the current behavior is believed to be conformant even if non-ideal. it may be improved at some later time.
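a sketch of the handler ordering the interface requires (POSIX: prepare handlers in reverse registration order before fork, parent and child handlers in registration order after); the fixed-size table is purely illustrative:

    #include <unistd.h>

    #define MAX_ATFORK 8
    static void (*prepare_h[MAX_ATFORK])(void);
    static void (*parent_h[MAX_ATFORK])(void);
    static void (*child_h[MAX_ATFORK])(void);
    static int nhandlers;

    pid_t fork_with_atfork_sketch(void)
    {
        int i;
        for (i = nhandlers - 1; i >= 0; i--)       /* prepare: reverse order */
            if (prepare_h[i]) prepare_h[i]();
        pid_t ret = fork();
        for (i = 0; i < nhandlers; i++) {          /* parent/child: registration order */
            void (*h)(void) = ret ? parent_h[i] : child_h[i];
            if (h) h();
        }
        return ret;
    }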
2011-02-12  initial check-in, version 0.5.0 (tag v0.5.0)  (Rich Felker, -0/+9)