Age | Commit message | Author | Lines |
|
the old __cp_cancel code path loaded the address of __cancel from the
GOT using the $gp register, which happened to be set to point to the
correct GOT by the calling C function, but there is no ABI requirement
that this happen. instead, go the roundabout way and compute the
address of __cancel via pc-relative and gp-relative addressing
starting with a fake return address generated by a bal instruction,
which is the same trick crt1 uses to bootstrap.
|
|
not only is pthread_kill expensive in this case; it also breaks
testing under qemu app-level emulation.
|
|
this file's .data section was not aligned, and just happened to get
the correct alignment with past builds. it's likely that the move of
atomic.s from arch/arm/src to src/thread/arm caused the change in
alignment, which broke the atomic and thread-pointer access fragments
on actual armv5 hardware.
|
|
|
|
all such arch-specific translation units are being moved to
appropriate arch dirs under the main src tree.
|
|
this is possible with the new build system that allows src/*/$(ARCH)/*
files which do not shadow a file in the parent directory, and yields a
more logical organization. eventually it will be possible to remove
arch/*/src from the build system.
|
|
sh needs runtime-selected atomic backends since there are a number of
supported models that use non-forwards-compatible (non-smp-compatible)
atomic mechanisms. previously, the code paths for this were highly
inefficient since they involved C function calls with multiple
branches in the callee and heavy spills in the caller. the new code
calls the runtime-selected asm fragment from inline asm with
extremely minimal clobbers, rather than using a function call.
for the sh4a case where the atomic mechanism is known and there is no
forward-compatibility issue, the movli.l and movco.l instructions are
provided as a_ll and a_sc, allowing the new shared atomic.h to
generate efficient inline versions of all the basic atomic operations
without needing a cas loop.
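as an illustration (a sketch, not the exact header contents), a shared
atomic.h can synthesize compare-and-swap from ll/sc wrappers along these
lines, with a_ll/a_sc standing for the arch-provided movli.l/movco.l
fragments:

    /* sketch only: a_ll/a_sc are assumed to be arch-provided inline-asm
     * wrappers (movli.l/movco.l on sh4a); a_sc returns nonzero on success. */
    static inline int a_cas(volatile int *p, int t, int s)
    {
        int old;
        do old = a_ll(p);                /* start an ll/sc sequence */
        while (old == t && !a_sc(p, s)); /* retry only if the store-conditional failed */
        return old;                      /* old != t means the cas did not take effect */
    }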
|
|
this was only a tiny optimization, and static-linked binaries should
not be calling __tls_get_addr anyway since the linker is supposed to
perform relaxation, resulting in use of the local-exec TLS model.
|
|
this is the first and simplest stage of removal of the SHARED macro,
which will eventually allow libc.a and libc.so to be produced from the
same object files.
the original motivation for these #ifdefs which are now being removed
was to allow building a static-only libc using a compiler that does
not support visibility. however, SHARED was the wrong condition to
test for this anyway; various assembly-language sources refer to
hidden symbols and declare them with the .hidden directive, making it
wrong to define the referenced symbols as non-hidden. if there is a
need in the future to build libc using compilers that lack visibility,
support could be moved to the build system or perhaps the __PIC__
macro could be checked instead of SHARED.
|
|
these files are all accepted as legacy arm syntax when producing arm
code, but legacy syntax cannot be used for producing thumb2 with
access to the full ISA. even after switching to UAL, some asm source
files contain instructions which are not valid in thumb mode, so these
will need to be addressed separately.
|
|
the idea of the three-instruction sequence being removed was to be
able to return to thumb code when used on armv4t+ from a thumb caller,
but also to be able to run on armv4 without the bx instruction
available (in which case the low bit of lr would always be 0).
however, without compiler support for generating such a sequence from
C code, which does not exist and which there is unlikely to be
interest in implementing, there is little point in having it in the
asm, and it would likely be easier to add pre-armv4t support via
enhanced linker handling of R_ARM_V4BX than at the compiler level.
removing this code simplifies adding support for building libc in
thumb2-only form (for cortex-m).
|
|
previously, only archs that needed to do stack cleanup defined a
__cp_cancel label for acting on cancellation in their syscall asm, and
a default definition was provided by a weak alias to __cancel, the C
function. this resulted in wrong codegen for arm on gcc versions
affected by pr 68178 and possibly similar issues (like pr 66609) on
other archs, and also created an inconsistency where the __cp_begin
and __cp_end labels were treated as const data but __cp_cancel was
treated as a function. this in turn caused incorrect code generation
on archs where function pointers point to function descriptors rather
than code (for now, only sh/fdpic).
|
|
using the actual mcontext_t definition rather than an overlaid pointer
array both improves correctness/readability and eliminates some ugly
hacks for archs with 64-bit registers but a 32-bit program counter.
also fix UB due to comparison of pointers not in a common array
object.
|
|
POSIX requires pthread_join to synchronize memory on success. The
futex wait inside __timedwait_cp cannot handle this because it's not
called in all cases. Also, in the case of a spurious wake, tid can
become zero between the wake and when the joining thread checks it.
|
|
clone calls back to a function pointer provided by the caller, which
will actually be a pointer to a function descriptor on fdpic. the
obvious solution is to have a separate version of clone for fdpic, but
I have taken a simpler approach that works around the problem. instead of
calling the pointed-to function from asm, a direct call is made to an
internal C function which then calls the pointed-to function. this
lets the C compiler generate the appropriate calling convention for an
indirect call with no need for ABI-specific assembly.
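in rough C terms, the idea looks like the following sketch (the helper
name and argument struct are illustrative, and __syscall/SYS_exit stand
for musl's internal syscall machinery):

    /* illustrative sketch: the clone asm makes a direct call to a C helper
     * like this one, and the helper performs the indirect call so the
     * compiler emits the ABI-correct calling sequence (e.g. loading a
     * function descriptor on fdpic). */
    struct clone_start_args {
        int (*fn)(void *);
        void *arg;
    };

    void __clone_start_c(struct clone_start_args *a)
    {
        /* child exits with the return value of the caller-supplied function */
        __syscall(SYS_exit, a->fn(a->arg));
    }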
|
|
the TLS ABI spec for mips, powerpc, and some other (presently
unsupported) RISC archs has the return value of __tls_get_addr offset
by +0x8000 and the result of DTPOFF relocations offset by -0x8000. I
had previously assumed this part of the ABI was actually just an
implementation detail, since the adjustments cancel out. however, when
the local dynamic model is used for accessing TLS that's known to be
in the same DSO, either of the following may happen:
1. the -0x8000 offset may already be applied to the argument structure
passed to __tls_get_addr at ld time, without any opportunity for
runtime relocations.
2. __tls_get_addr may be used with a zero offset argument to obtain a
base address for the module's TLS, to which the caller then applies
immediate offsets for individual objects accessed using the local
dynamic model. since the immediate offsets have the -0x8000 adjustment
applied to them, the base address they use needs to include the
+0x8000 offset.
it would be possible, but more complex, to store the pointers in the
dtv[] array with the +0x8000 offset pre-applied, to avoid the runtime
cost of adding 0x8000 on each call to __tls_get_addr. this change
could be made later if measurements show that it would help.
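conceptually, the adjustment amounts to something like the following (a
simplified sketch; the real function also handles lazily-installed
modules, and the dtv accessor name here is made up):

    #include <stddef.h>

    /* DTP_OFFSET is 0x8000 on the affected archs; DTPOFF relocations are
     * biased by -0x8000, so adding it here makes the two cancel out. */
    #define DTP_OFFSET 0x8000

    typedef struct {
        size_t ti_module;   /* module id */
        size_t ti_offset;   /* offset within the module's TLS block */
    } tls_index;

    void *__tls_get_addr(tls_index *v)
    {
        void **dtv = __current_dtv();   /* hypothetical accessor for this thread's dtv */
        return (char *)dtv[v->ti_module] + v->ti_offset + DTP_OFFSET;
    }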
|
|
linux kernel commit 46e12c07b3b9603c60fc1d421ff18618241cb081 caused
the mips syscall mechanism to fail with EFAULT when the userspace
stack pointer is invalid, breaking __unmapself used for detached
thread exit. the workaround is to set $sp to a known-valid, readable
address, and the simplest one to obtain is the address of the current
function, which is available (per o32 calling convention) in $25.
|
|
this error simply indicated a system without memory protection (NOMMU)
and should not cause failure in the caller.
|
|
nominally the low bits of the trap number on sh are the number of
syscall arguments, but they have never been used by the kernel, and
some code making syscalls does not even know the number of arguments
and needs to pass an arbitrarily high number anyway.
sh3/sh4 traditionally used the trap range 16-31 for syscalls, but part
of this range overlapped with hardware exceptions/interrupts on sh2
hardware, so an incompatible range 32-47 was chosen for sh2.
using trap number 31 everywhere, since it's in the existing sh3/sh4
range and does not conflict with sh2 hardware, is a proposed
unification of the kernel syscall convention that will allow binaries
to be shared between sh2 and sh3/sh4. if this is not accepted into the
kernel, we can refit the sh2 target with runtime selection mechanisms
for the trap number, but doing so would be invasive and would entail
non-trivial overhead.
|
|
due to the way the interrupt and syscall trap mechanism works,
userspace on sh2 must never set the stack pointer to an invalid value.
thus, the approach used on most archs, where __unmapself executes with
no stack for the interval between SYS_munmap and SYS_exit, is not
viable on sh2.
in order not to pessimize sh3/sh4, the sh asm version of __unmapself
is not removed. instead it's renamed and redirected through code that
calls either the generic (safe) __unmapself or the sh3/sh4 asm,
depending on compile-time and run-time conditions.
|
|
the sh2 target is being considered an ISA subset of sh3/sh4, in the
sense that binaries built for sh2 are intended to be usable on later
cpu models/kernels with mmu support. so rather than hard-coding
sh2-specific atomics, the runtime atomic selection mechanism that was
already in place has been extended to add sh2 atomics.
at this time, the sh2 atomics are not SMP-compatible; since the ISA
lacks actual atomic operations, the new code instead masks interrupts
for the duration of the atomic operation, producing an atomic result
on single-core. this is only possible because the kernel/hardware does
not impose protections against userspace doing so. additional changes
will be needed to support future SMP systems.
care has been taken to avoid producing significant additional code
size in the case where it's known at compile-time that the target is
not sh2 and does not need sh2-specific code.
|
|
functions which open in-memory FILE stream variants all shared a tail
with __fdopen, adding the FILE structure to stdio's open file list.
replacing this common tail with a function call reduces code size and
duplication of logic. the list is also partially encapsulated now.
function signatures were chosen to facilitate tail call optimization
and reduce the need for additional accessor functions.
with these changes, static linked programs that do not use stdio no
longer have an open file list at all.
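the shared tail is roughly of this shape (a sketch; the lock/unlock
helpers and the FILE members shown are assumed internals):

    /* append f to the open file list and return it, so callers can simply
     * end with "return __ofl_add(f);" (a tail call). */
    FILE *__ofl_add(FILE *f)
    {
        FILE **head = __ofl_lock();   /* assumed: locks the list, returns its head */
        f->next = *head;
        if (*head) (*head)->prev = f;
        *head = f;
        __ofl_unlock();
        return f;
    }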
|
|
this can be used to put off writing an asm version of __unmapself for
new archs, or as a permanent solution on archs where it's not
practical or even possible to run momentarily with no stack.
the concept here is simple: the caller takes a lock on a global shared
stack and uses it to make the munmap and exit syscalls. the only trick
is unlocking, which must be done after the thread exits, and this is
achieved by using the set_tid_address syscall to have the kernel zero
and futex-wake the lock word as part of the exit syscall.
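condensed, the mechanism looks something like this (a sketch using
assumed musl-internal helpers a_cas, __wait, __syscall, __pthread_self
and CRTJMP, not the exact file):

    #include <stddef.h>
    #include <stdint.h>

    static volatile int lock;       /* holds the exiting thread's tid while the stack is in use */
    static void *unmap_base;
    static size_t unmap_size;
    static char shared_stack[256];  /* tiny stack shared by all exiting detached threads */

    static void do_unmap(void)
    {
        __syscall(SYS_munmap, unmap_base, unmap_size);
        __syscall(SYS_exit, 0);
    }

    void __unmapself(void *base, size_t size)
    {
        char *stack = shared_stack + sizeof shared_stack;
        stack -= (uintptr_t)stack % 16;                /* keep the borrowed stack aligned */
        while (lock || a_cas(&lock, 0, __pthread_self()->tid))
            __wait(&lock, 0, lock, 1);                 /* futex-wait for the lock word to clear */
        __syscall(SYS_set_tid_address, &lock);         /* kernel zeroes and futex-wakes lock at exit */
        unmap_base = base;
        unmap_size = size;
        CRTJMP(do_unmap, stack);                       /* switch onto the shared stack */
    }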
|
|
otherwise disassemblers treat it as data.
|
|
the code being removed used atomics to track whether any threads might
be using a locale other than the current global locale, and whether
any threads might have abstract 8-bit (non-UTF-8) LC_CTYPE active, a
feature which was never committed (still pending). the motivations
were to support early execution prior to setup of the thread pointer,
to partially support systems (ancient kernels) where thread pointer
setup is not possible, and to avoid high performance cost on archs
where accessing the thread pointer may be very slow.
since commit 19a1fe670acb3ab9ead0fe31859ca7d4fe40dd54, the thread
pointer is always available, so these hacks are no longer needed.
removing them greatly simplifies the affected code.
|
|
commit f630df09b1fd954eda16e2f779da0b5ecc9d80d3 added logic to handle
the case where __set_thread_area is called more than once by reusing
the GDT slot already in the %gs register, and only setting up a new
GDT slot when %gs is zero. this created a hidden assumption that %gs
is zero when a new process image starts, which is true in practice on
Linux, but does not seem to be documented ABI, and fails to hold under
qemu app-level emulation.
while it would in theory be possible to zero %gs in the entry point
code, this code is shared between static and dynamic binaries, and
dynamic binaries must not clobber the value of %gs already setup by
the dynamic linker.
the alternative solution implemented in this commit simply uses global
data to store the GDT index that's selected. __set_thread_area should
only be called in the initial thread anyway (subsequent threads get
their thread pointer set up by __clone), but even if it were called by
another thread, it would simply read and write back the same GDT index
that was already assigned to the initial thread, and thus (in the x86
memory model) there is no data race.
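in C terms the approach boils down to something like this (illustrative
only; the real code is i386 asm, the variable name is made up, and
__syscall stands for musl's internal syscall primitive):

    #include <asm/ldt.h>        /* struct user_desc */
    #include <sys/syscall.h>
    #include <stdint.h>

    static int gdt_index = -1;  /* -1 = no slot assigned yet */

    int __set_thread_area(void *p)
    {
        struct user_desc desc = {
            .entry_number = gdt_index,    /* -1 asks the kernel for a free GDT slot */
            .base_addr = (uintptr_t)p,
            .limit = 0xfffff,
            .seg_32bit = 1,
            .limit_in_pages = 1,
            .useable = 1,
        };
        if (__syscall(SYS_set_thread_area, &desc) < 0) return -1;
        gdt_index = desc.entry_number;    /* remember the slot the kernel assigned */
        /* load %gs with the selector for that slot (index*8, RPL 3) */
        __asm__ __volatile__( "movw %w0,%%gs" : : "r"(desc.entry_number*8+3) );
        return 0;
    }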
|
|
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.
in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack a TLS canary ABI) the
choice does not matter.
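schematically (member and macro names here are illustrative, not the
exact ones):

    #include <stdint.h>

    /* the thread descriptor reserves a canary slot at every offset some
     * stack-protector ABI variant expects; an arch with a non-default
     * location selects its slot via a macro in pthread_arch.h. */
    struct pthread {
        struct pthread *self;
        void **dtv;
        uintptr_t canary;          /* default location */
        /* ... other members ... */
        uintptr_t canary2;         /* alternate fixed-offset location */
        /* ... */
        uintptr_t canary_at_end;   /* location near the end of the struct */
    };

    #ifndef CANARY                 /* pthread_arch.h may have chosen another slot */
    #define CANARY canary
    #endif

    /* thread setup then stores the random canary into self->CANARY. */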
|
|
the kernel does not properly clear the upper bits of the syscall
argument, so we have to do it before the syscall.
|
|
use CAS instead of swap since it's lighter for most archs, and keep
EBUSY in the lock value so that the old value obtained by CAS can be
used directly as the return value for pthread_spin_trylock.
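the resulting functions are essentially the following (a sketch; a_cas
and a_spin stand for musl's internal atomic helpers):

    #include <errno.h>
    #include <pthread.h>

    int pthread_spin_trylock(pthread_spinlock_t *s)
    {
        /* a_cas returns the previous value: 0 on success, EBUSY if held */
        return a_cas((volatile int *)s, 0, EBUSY);
    }

    int pthread_spin_lock(pthread_spinlock_t *s)
    {
        while (*(volatile int *)s || a_cas((volatile int *)s, 0, EBUSY))
            a_spin();
        return 0;
    }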
|
|
|
|
the leak was found by static analysis (reported by Alexander Monakov),
not tested/observed, but seems to have occured both when failing due
to O_EXCL, and in a race condition with O_CREAT but not O_EXCL where a
semaphore by the same name was created concurrently.
|
|
this fixes truncation of error messages containing long pathnames or
symbol names.
the dlerror state was previously required by POSIX to be global. the
resolution of bug 97 relaxed the requirements to allow thread-safe
implementations of dlerror with thread-local state and message buffer.
|
|
even hidden functions need @PLT symbol references; otherwise an
absolute address is produced instead of a PC-relative one.
|
|
this caused the dynamic linker/startup code to abort when r0 happened
to contain a negative value.
|
|
previously, the dynamic tlsdesc lookup functions and the i386
special-ABI ___tls_get_addr (3 underscores) function called
__tls_get_addr when the slot they wanted was not already setup;
__tls_get_addr would then in turn also see that it's not setup and
call __tls_get_new.
calling __tls_get_new directly is both more efficient and avoids the
issue of calling a non-hidden (public API/ABI) function from asm.
for the special i386 function, a weak reference to __tls_get_new is
used since this function is not defined when static linking (the code
path that needs it is unreachable in static-linked programs).
|
|
applying the attribute to a weak_alias macro was a hack. instead use a
separate declaration to apply the visibility, and consolidate
declarations together to avoid having visibility mess all over the
file.
|
|
|
|
in a few places, non-hidden symbols were referenced from asm in ways
that assumed ld-time binding. while there is no semantic reason these
symbols need to be hidden, fixing the references without making them
hidden was going to be ugly, and hidden reduces some bloat anyway.
in the asm files, .global/.hidden directives have been moved to the
top to unclutter the actual code.
|
|
at the point of call it was declared hidden, but the definition was
not hidden. for some toolchains this inconsistency produced textrels
without ld-time binding.
|
|
since 1.1.0, musl has nominally required a thread pointer to be setup.
most of the remaining code that was checking for its availability was
doing so for the sake of being usable by the dynamic linker. as of
commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer
necessary; the thread pointer is now valid before any libc code
(outside of dynamic linker bootstrap functions) runs.
this commit essentially concludes "phase 3" of the "transition path
for removing lazy init of thread pointer" project that began during
the 1.1.0 release cycle.
|
|
previously a new GDT slot was requested, even if one had already been
obtained by a previous call. instead extract the old slot number from
GS and reuse it if it was already set. the formula (GS-3)/8 for the
slot number automatically yields -1 (request for new slot) if GS is
zero (unset).
|
|
commit f08ab9e61a147630497198fe3239149275c0a3f4 introduced these
accidentally as remnants of some work I tried that did not work out.
|
|
|
|
this global lock allows certain unlock-type primitives to exclude
mmap/munmap operations which could change the identity of virtual
addresses while references to them still exist.
the original design mistakenly assumed mmap/munmap would conversely
need to exclude the same operations which exclude mmap/munmap, so the
vmlock was implemented as a sort of 'symmetric recursive rwlock'. this
turned out to be unnecessary.
commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the
interval during which mmap/munmap held their side of the lock, but
left the inappropriate lock design and some inefficiency.
the new design uses a separate function, __vm_wait, which does not
hold any lock itself and only waits for lock users which were already
present when it was called to release the lock. this is sufficient
because of the way operations that need to be excluded are sequenced:
the "unlock-type" operations using the vmlock need only block
mmap/munmap operations that are precipitated by (and thus sequenced
after) the atomic-unlock they perform while holding the vmlock.
this allows for a spectacular lack of synchronization in the __vm_wait
function itself.
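the resulting lock is tiny; roughly (a sketch, with a_inc, a_fetch_add,
__wait and __wake standing for musl's internal atomic and futex helpers):

    static volatile int vmlock[2];   /* [0] = count of lock users, [1] = waiters */

    void __vm_lock(void)
    {
        a_inc(vmlock);               /* unlock-type operations register themselves */
    }

    void __vm_unlock(void)
    {
        if (a_fetch_add(vmlock, -1) == 1 && vmlock[1])
            __wake(vmlock, -1, 1);   /* last user out wakes any __vm_wait callers */
    }

    void __vm_wait(void)
    {
        int tmp;
        while ((tmp = vmlock[0]))    /* wait for the count of users to drain to zero */
            __wait(vmlock, vmlock+1, tmp, 1);
    }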
|
|
as a result of commit 12e1e324683a1d381b7f15dd36c99b37dd44d940, kernel
processing of the robust list is only needed for process-shared
mutexes. previously the first attempt to lock any owner-tracked mutex
resulted in robust list initialization and a set_robust_list syscall.
this is no longer necessary, and since the kernel's record of the
robust list must now be cleared at thread exit time for detached
threads, optimizing it out is even more worthwhile than before.
|
|
the robust list head lies in the thread structure, which is unmapped
before exit for detached threads. this leaves the kernel unable to
process the exiting thread's robust list, and with a dangling pointer
which may happen to point to new unrelated data at the time the kernel
processes it.
userspace processing of the robust list was already needed for
non-pshared robust mutexes in order to perform private futex wakes
rather than the shared ones the kernel would do, but it was
conditional on linking pthread_mutexattr_setrobust and did not bother
processing the pshared mutexes in the list, which requires additional
logic for the robust list pending slot in case pthread_exit is
interrupted by asynchronous process termination.
the new robust list processing code is linked unconditionally (inlined
in pthread_exit), handles both private and shared mutexes, and also
removes the kernel's reference to the robust list before unmapping and
exit if the exiting thread is detached.
|
|
previously the implementation-internal signal used for multithreaded
set*id operations was left unblocked during handling of the
cancellation signal. however, on some archs, signal contexts are huge
(up to 5k) and the possibility of nested signal handlers drastically
increases the minimum stack requirement. since the cancellation signal
handler will do its job and return in bounded time before possibly
passing execution to application code, there is no need to allow other
signals to interrupt it.
|
|
This adds complete aarch64 target support including bigendian subarch.
Some of the long double math functions are known to be broken; otherwise,
interfaces should be fully functional, but at this point consider this
port experimental.
Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
|
|
due to a logic error in the use of masked cancellation mode,
pthread_cond_wait did not honor PTHREAD_CANCEL_DISABLE but instead
failed with ECANCELED when cancellation was pending.
|
|
|