commit c8b49b2fbc7faa8bf065220f11963d76c8a2eb93 introduced code that
checked bestsym to determine whether a matching symbol was found, but
bestsym is uninitialized when there is no match. instead use best,
consistent with its use in the rest of the function.
simplified from bug report and patch by Cheng Liu.
after commit a48ccc159a5fa061a18419296100ee48a1cd6cc9 removed the use
of _Noreturn on the stage3_func type (which only worked due to it
being defined to the "GNU C" attribute in C99 mode), GCC could no
longer assume that the ends of __dls2 and __dls2b are unreachable, and
produced a warning that a function marked _Noreturn returns.
also, since commit 4390383b32250a941ec616e8bff6f568a801b1c0, the
_Noreturn declaration for __libc_start_main in crt1/rcrt1 has been not
only inconsistent with the definition, but wrong. formally,
__libc_start_main does return, via a (hopefully) tail call to a helper
function after the barrier. incorrect usage of _Noreturn in the
declaration was probably formal UB.
the _Noreturn specifiers were not useful in any of these places, so
remove them all. now, the only remaining usage of _Noreturn is in
public interfaces where _Noreturn is part of their contract.
currently the bfd linker does not seem to create tls segments where
p_vaddr%p_align != 0, but this is valid in ELF and then the runtime
computed tls offset must satisfy
offset%p_align == (base+p_vaddr)%p_align
and in case of local exec tls (main executable) the smallest such
offset must be used (otherwise it is incompatible with the offset
computed by the static linker). the !TLS_ABOVE_TP case is handled
correctly (the offset is negative then in the formula).
the ldso code for TLS_ABOVE_TP is changed so the static tls offset
of each module satisfies the formula.
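as an illustration, the placement rule reduces to a small computation;
a minimal sketch (helper name hypothetical, not musl's actual code;
assumes p_align is a power of two, as ELF requires):

    /* smallest offset at or past the current end of static tls with
       offset % p_align == (base + p_vaddr) % p_align */
    static size_t place_tls(size_t tls_end, size_t base,
                            size_t p_vaddr, size_t p_align)
    {
        size_t want = (base + p_vaddr) & (p_align - 1);
        return tls_end + ((want - tls_end) & (p_align - 1));
    }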
tls_offset should always point to the end of the allocated static tls
area, but this was not handled correctly on "tls variant 1" targets
in the dynamic linker:
after application tls was allocated, tls_offset was aligned up,
potentially wasting tls space. (alignment may be needed at the
beginning of the tls area, not at the end, but that will be fixed
separately as it is unlikely to affect real binaries.)
when static tls was allocated for a shared library, tls_offset was
only updated with the size of the tls segment which does not include
alignment gaps, which can easily happen if the tls size update for
one library leaves tls_offset misaligned for the next one. this can
cause oob access in __copy_tls or arbitrary breakage at tls access.
(the issue was observed on aarch64 with rust binaries)
maintainer's note: commit 9d44b6460ab603487dab4d916342d9ba4467e6b9
removed their use.
this is the first part of a series of patches intended to make
__syscall fully self-contained in the object file produced using
syscall.h, which will make it possible for crt1 code to perform
syscalls.
the (confusingly named) i386 __vsyscall mechanism, which this commit
removes, was introduced before the presence of a valid thread pointer
was mandatory; back then the thread pointer was setup lazily only if
threads were used. the intent was to be able to perform syscalls using
the kernel's fast entry point in the VDSO, which can use the sysenter
(Intel) or syscall (AMD) instruction instead of int $128, but without
inlining an access to the __syscall global at the point of each
syscall, which would incur a significant size cost from PIC setup
everywhere. the mechanism also shuffled registers/calling convention
around to avoid spills of call-saved registers, and to avoid
allocating ebx or ebp via asm constraints, since there are plenty of
broken-but-supported compiler versions which are incapable of
allocating ebx with -fPIC or ebp with -fno-omit-frame-pointer.
the new mechanism preserves the properties of avoiding spills and
avoiding allocation of ebx/ebp in constraints, but does it inline,
using some fairly simple register shuffling, and uses a field of the
thread structure rather than global data for the vdso-provided syscall
entry point address.
for now, the external __syscall function is refactored not to use the
old __vsyscall mechanism, so that it can be kept; the intent is to
remove it too.
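for illustration, a sketch of the inline shuffling for the 1-argument
case (hypothetical code, shown with the legacy int $128 entry rather
than the vdso entry point):

    static inline long sketch_syscall1(long n, long a1)
    {
        long ret;
        /* keep ebx out of the constraint list: pass the argument in
           edx and swap it into ebx only around the syscall itself */
        __asm__ __volatile__ (
            "xchg %%ebx,%%edx ; int $128 ; xchg %%ebx,%%edx"
            : "=a"(ret) : "a"(n), "d"(a1) : "memory");
        return ret;
    }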
this affected the error path where dlopen successfully found and
loaded the requested dso and all its dependencies, but failed to
resolve one or more relocations, causing the operation to fail after
storage for the ctor queue was allocated.
commit 188759bbee057aa94db2bbb7cf7f5855f3b9ab53 wrongly put the free
for the ctor_queue array in the error path inside a loop over each
loaded dso that needed to be backed-out, rather than just doing it
once. in addition, the exit path also observed the ctor_queue pointer
still being nonzero, and would attempt to call ctors on the backed-out
dsos unless the double-free crashed the process first.
together with the previous two commits, this completes restoration of
the property that dynamic-linked apps with no external deps and no tls
have no failure paths before entry.
neither has nor can have any dependencies, but since commit
403555690775f7c8806372644f543518e6664e3b, gratuitous zero-length deps
arrays were being allocated for them. use a dummy array instead.
traditionally, we've provided a guarantee that dynamic-linked
applications with no external dependencies (nothing but libc) and no
thread-local storage have no failure paths before the entry point.
normally, thanks to reclaim_gaps, such a malloc (here, the allocation
of the ctor queue) will not require a syscall anyway, but if segment
alignment is unlucky, it might. use a builtin array for this common
special case.
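a sketch of the special case (names and sizes are assumptions, not the
actual code):

    #include <stdlib.h>

    struct dso;
    static struct dso *builtin_ctor_queue[4];

    /* fall back to malloc only when the queue cannot fit in the
       builtin array (i.e. when there are external deps) */
    static struct dso **alloc_ctor_queue(size_t cnt)
    {
        if (cnt + 1 <= sizeof builtin_ctor_queue / sizeof *builtin_ctor_queue)
            return builtin_ctor_queue;
        return calloc(cnt + 1, sizeof(struct dso *));
    }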
in the case where malloc is being replaced, it's not valid to call
malloc between final relocations and main app's crt1 entry point; on
fdpic archs the main app's entry point will not yet have performed the
self-fixups necessary to call its code.
to fix, reorder queue_ctors before final relocations. an alternative
solution would be doing the allocation from __libc_start_init, after
the entry point but before any ctors run. this is less desirable,
since it would leave a call to malloc that might be provided by the
application happening at startup when doing so can be easily avoided.
previously, going way back, there was simply no synchronization here.
a call to exit concurrent with ctor execution from dlopen could cause
a dtor to execute concurrently with its corresponding ctor, or could
cause dtors for newly-constructed libraries to be skipped.
introduce a shutting_down state that blocks further ctor execution,
producing the quiescence the dtor execution loop needs to ensure any
kind of consistency, and that blocks further calls to dlopen so that a
call into dlopen from a dtor cannot deadlock.
better approaches to some of this may be possible, but the changes
here at least make things safe.
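roughly, the dtor-side gating looks like this (a sketch only; names
and structure are assumed, not the actual code):

    #include <pthread.h>

    static pthread_mutex_t init_fini_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t init_fini_cond = PTHREAD_COND_INITIALIZER;
    static int shutting_down, ctors_running;

    static void begin_shutdown(void)
    {
        pthread_mutex_lock(&init_fini_lock);
        shutting_down = 1;            /* blocks new ctors and dlopen */
        while (ctors_running)         /* quiesce in-progress ctors */
            pthread_cond_wait(&init_fini_cond, &init_fini_lock);
        pthread_mutex_unlock(&init_fini_lock);
    }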
previously, shared library constructors at program start and dlopen
time were executed in reverse load order. some libraries, however,
rely on a depth-first dependency order, which most other dynamic
linker implementations provide. this is a much more reasonable, less
arbitrary order, and it turns out to have much better properties with
regard to how slow-running ctors affect multi-threaded programs, and
how recursive dlopen behaves.
this commit builds on previous work tracking direct dependencies of
each dso (commit 403555690775f7c8806372644f543518e6664e3b), and
performs a topological sort on the dependency graph at load time while
the main ldso lock is held and before success is committed, producing
a queue of constructors needed by the newly-loaded dso (or main
application). in the case of circular dependencies, the dependency
chain is simply broken at points where it becomes circular.
when the ctor queue is run, the init_fini_lock is held only for
iteration purposes; it's released during execution of each ctor, so
that arbitrarily-long-running application code no longer runs with a
lock held in the caller. this prevents a dlopen with slow ctors in one
thread from arbitrarily delaying other threads that call dlopen.
fully-independent ctors can run concurrently; when multiple threads
call dlopen with a shared dependency, one will end up executing the
ctor while the other waits on a condvar for it to finish.
another corner case improved by these changes is recursive dlopen
(call from a ctor). previously, recursive calls to dlopen could cause
a ctor for a library to be executed before the ctor for its
dependency, even when there was no relation between the calling
library and the library it was loading, simply due to the naive
reverse-load-order traversal. now, we can guarantee that recursive
dlopen in non-circular-dependency usage preserves the desired ctor
execution order properties, and that even in circular usage, at worst
the libraries whose ctors call dlopen will fail to have completed
construction when ctors that depend on them run.
init_fini_lock is changed to a normal, non-recursive mutex, since it
is no longer held while calling back into application code.
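one simple way to produce such a queue (a sketch, not necessarily the
commit's exact algorithm) is a post-order walk over direct deps, with
a mark that both dedupes and breaks circular chains:

    #include <stddef.h>

    struct dso { struct dso **deps; int mark; };

    /* append p to q only after all of its deps have been appended */
    static void queue_ctors_rec(struct dso *p, struct dso **q, size_t *n)
    {
        if (p->mark) return;
        p->mark = 1;
        for (size_t i = 0; p->deps && p->deps[i]; i++)
            queue_ctors_rec(p->deps[i], q, n);
        q[(*n)++] = p;
    }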
this makes calling dlsym on the main app more consistent with the
global symbol table (load order), and is a prerequisite for
dependency-order ctor execution to work correctly with LD_PRELOAD.
commit 403555690775f7c8806372644f543518e6664e3b introduced runtime
realloc of an array that may have been allocated before symbols were
resolved outside of libc, which is invalid if the allocator has been
replaced. track this condition and manually copy if needed.
dlsym with an explicit handle is specified to use "dependency order",
a breadth-first search rooted at the argument. this has always been
implemented by iterating a flattened dependency list built at dlopen
time. however, the logic for building this list was completely wrong
except in trivial cases; it simply used the list of libraries loaded
since a given library, and their direct dependencies, as that
library's dependencies, which could result in misordering, wrongful
omission of deep dependencies from the search, and wrongful inclusion
of unrelated libraries in the search.
further, libraries did not have any recorded list of resolved
dependencies until they were explicitly dlopened, meaning that
DT_NEEDED entries had to be resolved again whenever a library
participated as a dependency of more than one dlopened library.
with this overhaul, the resolved direct dependency list of each
library is always recorded when it is first loaded, and can be
extended to a full flattened breadth-first search list if dlopen is
called on the library. the extension is performed using the direct
dependency list as a queue and appending copies of the direct
dependency list of each dependency in the queue, excluding duplicates,
until the end of the queue is reached. the direct deps remain
available for future use as the initial subarray of the full deps
array. first-load logic in dlopen is updated to match these changes.
code introduced in commit 9d44b6460ab603487dab4d916342d9ba4467e6b9
wrongly attempted to read past the end of the currently-installed dtv
to determine if a dso provides new, not-already-installed tls. this
logic was probably leftover from an earlier draft of the code that
wrongly installed the new dtv before populating it.
it would work if we instead queried the new, not-yet-installed dtv;
rather than doing that, replace the incorrect check with a simple range check
against old_cnt. this also catches modules that have no tls at all
with a single condition.
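the resulting condition is a single comparison (field names assumed):

    /* tls_id is one-based, so a dso provides not-yet-installed tls
       iff its id is past the old slot count; tls_id==0 (no tls at
       all) also fails this check */
    int needs_install = p->tls_id > old_cnt;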
code introduced in commit 9d44b6460ab603487dab4d916342d9ba4467e6b9
wrongly assumed the dso list tail was the right place to find new dtv
storage. however, this is only true if the last-loaded dependency has
tls. the correct place to get it is the dso corresponding to the tls
module list tail. introduce a container_of macro to get it, and use it.
ultimately, dynamic tls allocation should be refactored so that this
is not an issue. there is no reason to be allocating new dtv space at
each load_library; instead it could happen after all new libraries
have been loaded but before they are committed. such changes may be
made later, but this commit fixes the present regression.
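for reference, container_of in its conventional form (the commit's
exact spelling may differ):

    #include <stddef.h>
    #define container_of(p,t,m) ((t*)((char *)(p)-offsetof(t,m)))

given a pointer to the tls module list tail embedded in a struct dso,
this recovers the enclosing dso whose dtv storage should be used.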
the motivation for this change is twofold. first, it gets the fallback
logic out of the dynamic linker, improving code readability and
organization. second, it provides application code that wants to use
the membarrier syscall, which depends on preregistration of intent
before the process becomes multithreaded unless unbounded latency is
acceptable, with a symbol that, when linked, ensures that this
registration happens.
commit 9d44b6460ab603487dab4d916342d9ba4467e6b9 inadvertently
contained leftover logic from a previous approach to the fallback
signaling loop. it had no adverse effect, since j was always nonzero
if the loop body was reachable, but it makes no sense to be there with
the current approach to avoid signaling self.
previously, dynamic loading of new libraries with thread-local storage
allocated the storage needed for all existing threads at load-time,
precluding late failure that can't be handled, but left installation
in existing threads to take place lazily on first access. this imposed
an additional memory access and branch on every dynamic tls access,
and imposed a requirement, which was not actually met, that the
dynamic tlsdesc asm functions preserve all call-clobbered registers
before calling C code to install new dynamic tls on first access.
the x86[_64] versions of this code wrongly omitted saving and
restoring of fpu/vector registers, assuming the compiler would not
generate anything using them in the called C code. the arm and aarch64
versions saved known existing registers, but failed to be future-proof
against expansion of the register file.
now that we track live threads in a list, it's possible to install the
new dynamic tls for each thread at dlopen time. for the most part,
synchronization is not needed, because if a thread has not
synchronized with completion of the dlopen, there is no way it can
meaningfully request access to a slot past the end of the old dtv,
which remains valid for accessing slots which already existed.
however, it is necessary to ensure that, if a thread sees its new dtv
pointer, it sees correct pointers in each of the slots that existed
prior to the dlopen. my understanding is that, on most real-world
coherency architectures including all the ones we presently support, a
built-in consume order guarantees this; however, don't rely on that.
instead, the SYS_membarrier syscall is used to ensure that all threads
see the stores to the slots of their new dtv prior to the installation
of the new dtv. if it is not supported, the same is implemented in
userspace via signals, using the same mechanism as __synccall.
the __tls_get_addr function, variants, and dynamic tlsdesc asm
functions are all updated to remove the fallback paths for claiming
new dynamic tls, and are now all branch-free.
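the barrier step amounts to something like this (a sketch; the
registration of intent at program start and the signal-based fallback
are elided):

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/membarrier.h>

    /* after populating the slots of every thread's new dtv, force
       those stores to be globally visible before any thread can see
       its new dtv pointer */
    static void dtv_store_barrier(void)
    {
        if (syscall(SYS_membarrier,
                    MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0) < 0)
            ; /* fall back to __synccall-style signaling (omitted) */
    }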
commit a603a75a72bb469c6be4963ed1b55fabe675fe15 removed attribute
const from __errno_location and pthread_self, and the same reasoning
forced arch definitions of __pthread_self to use volatile asm,
significantly impacting code generation and imposing manual caching of
pointers where the impact might be noticeable.
reorder the thread pointer setup and place it across a strong barrier
(symbolic function lookup) so that there is no assumed ordering
between the initialization and the accesses to the thread pointer in
stage 3.
the placement triggered -Wmisleading-indentation warnings if enabled,
and was gratuitously confusing to anyone reading the code.
commit 6ba5517a460c6c438f64d69464fdfc3269a4c91a modified
__tls_get_addr to offset the address by +DTP_OFFSET (0x8000 on
powerpc, mips, etc.) and adjusted the result of DTPREL relocations by
-DTP_OFFSET to compensate, but missed changing the argument setup for
calls to __tls_get_addr from dlsym.
as explained in commit 6ba5517a460c6c438f64d69464fdfc3269a4c91a, some
archs use an offset (typically -0x8000) with their DTPOFF relocations,
which __tls_get_addr needs to invert. on affected archs, which lack
direct support for large immediates, this can cost multiple extra
instructions in the hot path. instead, incorporate the DTP_OFFSET into
the DTV entries. this means they are no longer valid pointers, so
store them as an array of uintptr_t rather than void *; this also
makes it easier to access slot 0 as a valid slot count.
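the resulting hot path is a single add; a sketch of the shape of the
function, with assumed types:

    #include <stdint.h>

    typedef uintptr_t tls_mod_off_t;

    /* v[0] is the module id, v[1] the relocation-produced offset; the
       DTP_OFFSET correction is already folded into the dtv entries */
    void *tls_get_addr_sketch(tls_mod_off_t *v, uintptr_t *dtv)
    {
        return (void *)(dtv[v[0]] + v[1]);
    }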
commit e75b16cf93ebbc1ce758d3ea6b2923e8b2457c68 left behind cruft in
two places, __reset_tls and __tls_get_new, from back when it was
possible to have uninitialized gap slots indicated by a null pointer
in the DTV. since the concept of null pointer is no longer meaningful
with an offset applied, remove this cruft.
presently there are no archs with both TLSDESC and nonzero DTP_OFFSET,
but the dynamic TLSDESC relocation code is also updated to apply an
inverted offset to its offset field, so that the offset DTV would not
impose a runtime cost in TLSDESC resolver functions.
unlike other asm where the baseline ISA is used, these functions are
hot paths and use ISA-level specializations.
call-clobbered vfp registers are saved before calling __tls_get_new,
since there is no guarantee it won't use them. while setjmp/longjmp
have to use hwcap to decide whether the fpu is in use, since
application code could be using vfp registers even if libc was
compiled as pure softfloat, __tls_get_new is part of libc and can be
assumed not to have access to vfp registers if tlsdesc.S does not.
thus it suffices just to check the predefined preprocessor macros. the
check for __ARM_PCS_VFP is redundant; !__SOFTFP__ must always be true
if the target ISA level includes fpu instructions/registers.
this facilitates building software that assumes a large default stack
size without any patching to call pthread_setattr_default_np or
pthread_attr_setstacksize at each thread creation site, using just
LDFLAGS.
normally the PT_GNU_STACK header is used only to reflect whether
executable stack is desired, but with GNU ld at least, passing
-Wl,-z,stack-size=N will set a size on the program header. with this
patch, that size will be incorporated into the default stack size
(subject to increase-only rule and DEFAULT_STACK_MAX limit).
both static and dynamic linking honor the program header. for dynamic
linking, all libraries loaded at program start, including preloaded
ones, are considered. dlopened libraries are not considered, for
several reasons. extra logic would be needed to defer processing until
the load of the new library is committed, synchronization would be
needed since other threads may be running concurrently, and the
effectiveness would be limited since the larger size would not apply to
threads that already existed at the time of dlopen. programs that will
dlopen code expecting a large stack need to declare the requirement
themselves, or pthread_setattr_default_np can be used.
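the processing amounts to roughly the following (a sketch; names and
the cap value are illustrative, not the actual code):

    #include <elf.h>
    #include <stddef.h>

    #define DEFAULT_STACK_MAX (8<<20)   /* illustrative cap */

    static size_t __default_stacksize;

    /* -Wl,-z,stack-size=N lands in p_memsz of PT_GNU_STACK */
    static void honor_gnu_stack(const Elf64_Phdr *ph)
    {
        if (ph->p_type == PT_GNU_STACK && ph->p_memsz > __default_stacksize)
            __default_stacksize = ph->p_memsz < DEFAULT_STACK_MAX
                ? ph->p_memsz : DEFAULT_STACK_MAX;
    }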
commits leading up to this one have moved the vast majority of
libc-internal interface declarations to appropriate internal headers,
allowing them to be type-checked and setting the stage to limit their
visibility. the ones that have not yet been moved are mostly
namespace-protected aliases for standard/public interfaces, which
exist to facilitate implementing plain C functions in terms of POSIX
functionality, or C or POSIX functionality in terms of extensions that
are not standardized. some don't quite fit this description, but are
"internally public" interfacs between subsystems of libc.
rather than create a number of newly-named headers to declare these
functions, and having to add explicit include directives for them to
every source file where they're needed, I have introduced a method of
wrapping the corresponding public headers.
parallel to the public headers in $(srcdir)/include, we now have
wrappers in $(srcdir)/src/include that come earlier in the include
path order. they include the public header they're wrapping, then add
declarations for namespace-protected versions of the same interfaces
and any "internally public" interfaces for the subsystem they
along these lines, the wrapper for features.h is now responsible for
the definition of the hidden, weak, and weak_alias macros. this means
source files will no longer need to include any special headers to
access these features.
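the macros in question, in their conventional forms (modulo exact
spelling):

    #define hidden __attribute__((__visibility__("hidden")))
    #define weak __attribute__((__weak__))
    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))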
over time, it is my expectation that the scope of what is "internally
public" will expand, reducing the number of source files which need to
include *_impl.h and related headers down to those which are actually
implementing the corresponding subsystems, not just using them.
it's already included in all places where these are needed, and aside
from __tls_get_addr, they're all implementation internals.
eliminate gratuitous glue function for reporting the version, which
was probably leftover from the old dynamic linker design which lacked
a clear barrier for when/how it could access global data. put the
declaration for the data object that replaces it in libc.h where it
can be type checked.
this cleans up what had become widespread direct inline use of "GNU C"
style attributes directly in the source, and lowers the barrier to
increased use of hidden visibility, which will be useful to recovering
some of the efficiency lost when the protected visibility hack was
dropped in commit dc2f368e565c37728b0d620380b849c3a1ddd78f, especially
on archs where the PLT ABI is costly.
previously, this operation succeeded, and the relocation results
worked for access from new threads created after dlopen, but produced
invalid accesses (and possibly clobbered other memory) from threads
that already existed.
the way the check is written, it still permits dlopen of libraries
containing initial-exec references to static TLS (TLS in the main
program or in a dynamic library loaded at startup).
tls_id is one-based, whereas [static_]tls_cnt is a count, so
comparison for checking that a given tls_id is dynamic rather than
static needs to use strict inequality.
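i.e., with one-based ids (field names assumed):

    /* ids 1..static_tls_cnt are static tls; >= would misclassify the
       last static module as dynamic */
    int is_dynamic = p->tls_id > static_tls_cnt;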
since slack space at the beginning and/or end of writable load maps is
donated to malloc, the application could obtain valid pointers in
these ranges which dladdr would erroneously identify as part of the
shared object whose mapping they came from.
instead of checking the queried address against the mapping base and
length, check it against the load segments from the program headers,
and only match the dso if it lies within the bounds of one of them.
as a shortcut, if the address does match the range of the mapping but
not any of the load segments, we know it cannot match any other dso
and can immediately return failure.
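a sketch of the phdr-based check (field names assumed; the unsigned
subtraction performs both bounds checks at once):

    #include <elf.h>
    #include <stddef.h>

    static int in_load_segment(size_t addr, size_t base,
                               const Elf64_Phdr *ph, size_t phnum)
    {
        for (size_t i = 0; i < phnum; i++)
            if (ph[i].p_type == PT_LOAD
                && addr - (base + ph[i].p_vaddr) < ph[i].p_memsz)
                return 1;
        return 0;
    }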
the early-exit condition for the symbol match loop on exact matches
caused dladdr to produce the first match for an exact match, but the
last match for an inexact match. in the interest of consistency,
require a strictly-closer match to replace an already-found one.
commit 8b8fb7f03721c42445f982582f462144ab60a1a0 added logic to prevent
matching a symbol with no recorded size (closest-match) when there is
an intervening symbol whose size was recorded, but it only worked when
the intervening symbol was encountered later in the search.
instead of rejecting symbols where addr falls outside their recorded
size during the closest-match search, accept them to find the true
closest-match, then reject such a result only once the search has
finished.
based on patch by Axel Siebenborn, with fixes discussed on the mailing
list after submission, and rebased around the UB fix in commit
avoid spurious symbol matches by dladdr beyond symbol size. for
symbols with a size recorded, only match if the queried address lies
within the address range determined by the symbol address and size.
for symbols with no size recorded, the old closest-match behavior is
kept, as long as there is no intervening symbol with a recorded size.
the case where no symbol is matched, but the address does lie within
the memory range of a shared object, is specified as success. fix the
return value and produce a valid (with null dli_sname and dli_saddr)
Dl_info in this case.
writable load segments can have size-in-memory larger than their size
in the ELF file, representing bss or equivalent. the initial partial
page has to be zero-filled, and additional anonymous pages have to be
mapped such that accesses don't fail with SIGBUS.
map_library skips redundant MAP_FIXED mapping of the initial
(lowest-address) segment when processing LOAD segments since it was
already mapped when reserving the virtual address range, but in doing
so, inadvertently also skipped the code to fill/map bss. typical
executable and library files have two or more LOAD segments, and the
first one is text/rodata (non-writable) and thus has no bss, but it is
syntactically valid for an ELF program/library to put its writable
segment first, or to have only one segment (everything writable). the
binutils bfd-based linker has been observed to create such programs in
the presence of unusual sections or linker scripts.
fix by moving only the mmap_fixed operation under the conditional
rather than skipping the remainder of the loop body. add a check to
avoid bss processing in the case where the segment is not writable;
this should not happen, but if it does, the change would be a crashing
regression without this check.
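for one writable LOAD segment, the bss handling looks roughly like the
following (a sketch with Elf64 types for brevity; error handling
elided, names assumed):

    #include <string.h>
    #include <sys/mman.h>
    #include <elf.h>

    static void fill_bss(char *base, const Elf64_Phdr *ph,
                         int prot, size_t pagesz)
    {
        if (!(ph->p_flags & PF_W) || ph->p_memsz <= ph->p_filesz)
            return;
        /* zero the file-backed tail of the last partially-used page */
        size_t brk = (size_t)base + ph->p_vaddr + ph->p_filesz;
        size_t end = (size_t)base + ph->p_vaddr + ph->p_memsz;
        size_t pgbrk = brk + pagesz - 1 & -pagesz;
        memset((void *)brk, 0, pgbrk - brk);
        /* map anonymous zero pages for the remainder */
        if (pgbrk < end)
            mmap((void *)pgbrk, end - pgbrk, prot,
                 MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0);
    }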
In TLS variant I the TLS is above TP (or above a fixed offset from TP)
but on some targets there is a reserved gap above TP before TLS starts.
This matters for the local-exec tls access model when the offsets of
TLS variables from the TP are hard coded by the linker into the
executable, so the libc must compute these offsets the same way as the
linker. The tls offset of the main module has to be
alignup(GAP_ABOVE_TP, main_tls_align).
If there is no TLS in the main module then the gap can be ignored
since musl does not use it and the tls access models of shared
libraries are not affected.
The previous setup only worked if (tls_align & -GAP_ABOVE_TP) == 0
(i.e. TLS did not require large alignment) because the gap was
treated as a fixed offset from TP. Now the TP points at the end
of the pthread struct (which is aligned) and there is a gap above
it (which may also need alignment).
The fix required changing TP_ADJ and __pthread_self on affected
targets (aarch64, arm and sh) and in the tlsdesc asm the offset to
access the dtv changed too.
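The main-module offset computation becomes (a sketch; the GAP_ABOVE_TP
value is arch-specific and illustrative here, and tls_align must be a
power of two):

    #include <stddef.h>

    #define GAP_ABOVE_TP 16

    /* alignup(GAP_ABOVE_TP, tls_align) */
    static size_t main_tls_offset(size_t tls_align)
    {
        return GAP_ABOVE_TP + (-GAP_ABOVE_TP & (tls_align - 1));
    }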
commit 618b18c78e33acfe54a4434e91aa57b8e171df89 removed the previous
detection and hardening since it was incorrect. commit
72141795d4edd17f88da192447395a48444afa10 already handled all that
remained for hardening the static-linked case. in the dynamic-linked
case, have the dynamic linker check whether malloc was replaced and
make that information available.
with these changes, the properties documented in commit
c9f415d7ea2dace5bf77f6518b6afc36bb7a5732 are restored: if calloc is
not provided, it will behave as malloc+memset, and any of the
memalign-family functions not provided will fail with ENOMEM.
the existing laddr function for fdpic cannot translate ELF virtual
addresses outside of the LOAD segments to runtime addresses because
the fdpic loadmap only covers the logically-mapped part. however the
whole point of reclaim_gaps is to recover the slack space up to the
page boundaries, so it needs to work with such addresses.
add a new laddr_pg function that accepts any address in the page range
for the LOAD segment by expanding the loadmap records out to page
boundaries. only use the new version for reclaim_gaps, so as not to
impact performance of other address lookups.
also, only use laddr_pg for the start address of a gap; the end
address lies one byte beyond the end, potentially in a different page
where it would get mapped differently. instead of mapping end, apply
the length (end-start) to the mapped value of start.
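a sketch of laddr_pg (field names assumed; PGSZ is the page size):

    static void *laddr_pg(const struct dso *p, size_t v)
    {
        for (int j = 0; j < p->loadmap->nsegs; j++) {
            size_t start = p->loadmap->segs[j].p_vaddr & -PGSZ;
            size_t end = p->loadmap->segs[j].p_vaddr
                + p->loadmap->segs[j].p_memsz + PGSZ-1 & -PGSZ;
            if (v - start < end - start)
                return (void *)(v - p->loadmap->segs[j].p_vaddr
                    + p->loadmap->segs[j].addr);
        }
        return 0;
    }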
Split 'free' into unmap_chunk and bin_chunk, use the latter to introduce
__malloc_donate and use it in reclaim_gaps instead of calling 'free'.
in theory non-absolute origins can only arise when either the main
program is invoked by running ldso as a command (inherently non-suid)
or when dlopen was called with a relative pathname containing at least
one slash. such usage would be inherently insecure in an suid program
anyway, so the old behavior here does not seem to have been insecure.
harden against it anyway.
the rpath fixup code assumed any module's name field would contain at
least one slash, an invariant which is usually met but not in the case
of a main executable loaded from the current working directory by
running ldd or ldso as a command. it would be possible to make this
invariant always hold, but it has a higher runtime allocation cost and
does not seem useful elsewhere, so just patch things up in fixup_rpath
instead.
the Linux and FreeBSD man pages for dladdr document dli_fbase as the
"base address" of the library/module found. normally (e.g. AT_BASE)
the term "base" is used to denote the base address relative to which
p_vaddr addresses are interpreted; however in the case of dladdr's
Dl_info structure, existing implementations define it as the lowest
address of the mapping, which makes sense in the context of
determining which module's memory range the input address falls
within.
since this is a nonstandard interface provided to mimic one provided
by other implementations, adjust it to match their behavior.
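in code terms, the change is essentially (field names assumed):

    /* report the lowest address of the mapping, as other
       implementations do, rather than the p_vaddr-relative load base */
    info->dli_fbase = p->map;   /* previously: p->base */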