this extends the brk/stack collision protection added to full malloc
in commit 276904c2f6bde3a31a24ebfa201482601d18b4f9 to also protect the
__simple_malloc function used in static-linked programs that don't
reference the free function.
it also extends support for using mmap when brk fails, which full
malloc got in commit 5446303328adf4b4e36d9fba21848e6feb55fab4, to
__simple_malloc.
since __simple_malloc may expand the heap by arbitrarily large
increments, the stack collision detection is enhanced to detect
interval overlap rather than just proximity of a single address to the
stack. code size is increased a bit, but this is partly offset by the
sharing of code between the two malloc implementations, both of which,
due to linking semantics, get linked into a program that needs the
full malloc with realloc/free support.
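
as a concrete sketch (close to, but not claimed to be byte-for-byte,
the musl code; libc.auxv is the musl-internal pointer into the aux
vector near the top of the main thread's stack, and a local variable's
address stands in for the calling thread's stack pointer), the
interval check looks like:

    #include <stdint.h>

    /* nonzero if growing the brk from old to new would enter the
     * 8 MB guard interval below either stack */
    static int traverses_stack_p(uintptr_t old, uintptr_t new)
    {
        const uintptr_t len = 8<<20;
        uintptr_t a, b;

        b = (uintptr_t)libc.auxv;   /* proxy for main thread's stack */
        a = b > len ? b - len : 0;
        if (new > a && old < b) return 1;

        b = (uintptr_t)&b;          /* proxy for the caller's stack */
        a = b > len ? b - len : 0;
        if (new > a && old < b) return 1;

        return 0;
    }

when this returns nonzero, the allocator refuses to move the brk and
falls back to mmap.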
commit 58165923890865a6ac042fafce13f440ee986fd9 added these optional
cancellation points on the basis that cancellable stdio could be
useful, to unblock threads stuck on stdio operations that will never
complete. however, the only way to ensure that cancellation can
achieve this is to violate the rules for side effects when
cancellation is acted upon, discarding knowledge of any partial data
transfer already completed. our implementation exhibited this behavior
and was thus non-conforming.
in addition to improving correctness, removing these cancellation
points moderately reduces code size, and should significantly improve
performance on i386, where sysenter/syscall instructions can be used
instead of "int $128" for non-cancellable syscalls.
the old idiom, f->mode |= f->mode+1, was adapted from the idiom for
setting byte orientation, f->mode |= f->mode-1, but the adaptation was
incorrect. unless the stream was already set byte-oriented, this code
incremented f->mode each time it was executed, which would eventually
lead to overflow. it could be fixed by changing it to f->mode |= 1,
but upcoming changes will require slightly more work at the time of
wide orientation, so it makes sense to just call fwide. as an
optimization in the single-character functions, fwide is only called
if the stream is not already wide-oriented.
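
a standalone demonstration of the difference, with plain ints standing
in for f->mode:

    #include <stdio.h>

    int main(void)
    {
        int wide = 0;
        wide |= wide + 1;   /* 0 -> 1: looks correct the first time */
        wide |= wide + 1;   /* 1|2 -> 3: grows instead of staying 1 */
        wide |= wide + 1;   /* 3|4 -> 7: on its way to overflow */

        int byte = 0;
        byte |= byte - 1;   /* 0|-1 -> -1: correct */
        byte |= byte - 1;   /* -1|-2 -> -1: stable, as intended */

        printf("%d %d\n", wide, byte);   /* prints 7 -1 */
        return 0;
    }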
this is undefined, but supported in our implementation of the normal
printf, so for consistency the wide variant should support it too.
this can be used to put off writing an asm version of __unmapself for
new archs, or as a permanent solution on archs where it's not
practical or even possible to run momentarily with no stack.
the concept here is simple: the caller takes a lock on a global shared
stack and uses it to make the munmap and exit syscalls. the only trick
is unlocking, which must be done after the thread exits, and this is
achieved by using the set_tid_address syscall to have the kernel zero
and futex-wake the lock word as part of the exit syscall.
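
the mechanism, modeled on musl's src/thread/__unmapself.c (a_cas,
__wait, __syscall and CRTJMP are musl-internal primitives: atomic
compare-and-swap, futex wait, raw syscall, and a jump that switches
stacks):

    static volatile int lock;
    static void *unmap_base;
    static size_t unmap_size;
    static char shared_stack[256];

    static void do_unmap()
    {
        /* now on the shared stack; our own stack may disappear */
        __syscall(SYS_munmap, unmap_base, unmap_size);
        __syscall(SYS_exit);
    }

    void __unmapself(void *base, size_t size)
    {
        int tid = __pthread_self()->tid;
        char *stack = shared_stack + sizeof shared_stack;
        stack -= (uintptr_t)stack % 16;   /* keep stack aligned */
        while (lock || a_cas(&lock, 0, tid))
            __wait(&lock, 0, tid, 1);
        /* the kernel zeroes and futex-wakes &lock when this thread
         * exits, handing the shared stack to the next caller */
        __syscall(SYS_set_tid_address, &lock);
        unmap_base = base;
        unmap_size = size;
        CRTJMP(do_unmap, stack);
    }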
the linux/nommu fdpic ELF loader sets up the brk range to overlap
entirely with the main thread's stack (but growing from opposite
ends), so that the resulting failure mode for malloc is not to return
a null pointer but to start returning pointers to memory that overlaps
with the caller's stack. needless to say this is extremely dangerous and
makes brk unusable.
since it's non-trivial to detect execution environments that might be
affected by this kernel bug, and since the severity of the bug makes
any sort of detection that might yield false-negatives unsafe, we
instead check the proximity of the brk to the stack pointer each time
the brk is to be expanded. both the main thread's stack (where the
real known risk lies) and the calling thread's stack are checked. an
arbitrary gap distance of 8 MB is imposed, chosen to be larger than
linux default main-thread stack reservation sizes and larger than any
reasonable stack configuration on nommu.
the effectiveness of this patch relies on an assumption that the amount
by which the brk is being grown is smaller than the gap limit, which
is always true for malloc's use of brk. reliance on this assumption is
why the check is being done in malloc-specific code and not in __brk.
for several pwd/grp functions, the only way the caller can distinguish
between a successful negative result ("no such user/group") and an
internal error is by clearing errno before the call and checking errno
afterwards. the nscd backend support code correctly simulated a
not-found response on systems where such a backend is not running, but
failed to restore errno.
this commit also fixes an outdated/incorrect comment.
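
the calling convention in question, for illustration:

    #include <errno.h>
    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        errno = 0;
        struct passwd *pw = getpwnam("nosuchuser");
        if (!pw && errno)
            perror("getpwnam");     /* internal error */
        else if (!pw)
            puts("no such user");   /* successful negative result */
        else
            printf("uid %d\n", (int)pw->pw_uid);
        return 0;
    }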
the arm atomics/TLS runtime selection code is called from
__set_thread_area and depends on having libc.auxv and __hwcap
available. commit 71f099cb7db821c51d8f39dfac622c61e54d794c moved the
first call to __set_thread_area to the top of dynamic linking stage 3,
before this data is made available, causing the runtime detection code
to always see __hwcap as zero and thereby select the atomics/TLS
implementations based on kuser helper.
upcoming work on superh will use similar runtime detection.
ideally this early-init code should be cleanly refactored and shared
between the dynamic linker and static-linked startup.
unless/until the byte-based C locale is implemented, defining
MB_CUR_MAX to 1 in the C locale is wrong. no internal code currently
uses the MB_CUR_MAX macro, but having it defined inconsistently is
error-prone. applications get the value from stdlib.h.
presumably internal code (ungetwc and fputwc) was written assuming a
macro implementation of isascii existed; otherwise use of isascii is
just a pointless optimization.
aside from being invalid, the early check only optimized the error
case, and likely pessimized the common case by separating the
two branches on isascii(c) at opposite ends of the function.
commit 68630b55c0c7219fe9df70dc28ffbf9efc8021d8 caused the new locale
to be assigned unconditionally, resulting in crashes later on.
commit f3ddd173806fd5c60b3f034528ca24542aecc5b9 inadvertently removed
the early check for "none" type relocations, causing the address
dso->base+0 to be dereferenced to obtain an addend. shared libraries
(including libc.so) and PIE executables were unaffected, since their
base addresses are the actual address of their mappings and are
readable. non-PIE main executables, however, have a base address of 0
because their load addresses are absolute and not offset at load time.
in practice none-type relocations do not arise with toolchains that
are in use except on mips, and on mips it's moderately rare for a
non-PIE executable to have a relocation table, since the mips-specific
got processing serves in its place for most purposes.
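
a minimal sketch of the restored ordering; R_TYPE and REL_NONE follow
musl's internal nomenclature but are defined here just for
illustration:

    #include <stddef.h>

    #define R_TYPE(info) ((info) & 0x7fffffff)
    #define REL_NONE 0

    static void do_relocs(unsigned char *base, size_t *rel, size_t relsz)
    {
        for (; relsz; rel += 2, relsz -= 2*sizeof(size_t)) {
            /* check the type before touching the relocation address:
             * base may be 0 for a non-PIE main executable */
            if (R_TYPE(rel[1]) == REL_NONE) continue;
            size_t *reloc_addr = (size_t *)(base + rel[0]);
            size_t addend = *reloc_addr;   /* inline (REL) addend */
            /* ... apply the relocation ... */
            (void)addend;
        }
    }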
these functions were written to handle clearing eof status, but failed
to account for the __toread function's handling of eof. with this
patch applied, __toread still returns EOF when the file is in eof
status, so that read operations will fail, but it also sets up valid
buffer pointers for read mode, which are set to the end of the buffer
rather than the beginning in order to make the whole buffer available
for pushback.
minor changes to __uflow were needed since it's now possible to have
non-zero buffer pointers while in eof status. as made, these changes
remove a 'fast path' bypassing the function call to __toread, which
could be reintroduced with slightly different logic, but since
ordinary files have a syscall in f->read, optimizing the code path
does not seem worthwhile.
the __stdio_read function is also updated not to zero the read buffer
pointers on eof/error. while not necessary for correctness, this
change avoids the overhead of calling __toread in ungetc after
reaching eof, and it also reduces code size and increases consistency
with the fmemopen read operation which does not zero the pointers.
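
the resulting __toread logic looks roughly like this (member names per
musl's stdio_impl.h):

    #include "stdio_impl.h"   /* musl-internal FILE layout and flags */

    int __toread(FILE *f)
    {
        f->mode |= f->mode - 1;   /* commit byte orientation */
        if (f->wpos != f->wbase) f->write(f, 0, 0);   /* flush output */
        f->wpos = f->wbase = f->wend = 0;
        if (f->flags & F_NORD) {
            f->flags |= F_ERR;
            return EOF;
        }
        /* valid but empty read buffer, positioned at the end so the
         * whole buffer is available for pushback */
        f->rpos = f->rend = f->buf + f->buf_size;
        return (f->flags & F_EOF) ? EOF : 0;
    }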
this frees applications which need to make temporary use of the C
locale (via uselocale) from the possibility that newlocale might fail.
the C.UTF-8 locale is also provided as a static locale. presently they
behave the same, but this may change in the future.
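
for example, a caller can now do the following without a failure path
(fn is an arbitrary callback):

    #include <locale.h>

    void with_c_locale(void (*fn)(void))
    {
        /* "C" is backed by a static locale object, so this newlocale
         * cannot fail; portable code should still check for 0 */
        locale_t c = newlocale(LC_ALL_MASK, "C", (locale_t)0);
        locale_t old = uselocale(c);
        fn();   /* runs with C locale rules in this thread */
        uselocale(old);
    }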
since the __setlocalecat function was removed, the filename
__setlocalecat.c no longer made sense.
previously, LC_MESSAGES was treated specially as the only category
which could be set to a locale name without a definition file, in
order to facilitate gettext message translations when no libc locale
was available. LC_NUMERIC was completely un-settable, and LC_CTYPE
stored a flag intended to be used for a possible future byte-based C
locale, instead of storing a __locale_map pointer like the other
categories.
this patch changes all categories to be represented by pointers to
__locale_map structures, and allows locale names without definition
files to be treated as valid locales with trivial definition when used
in any category. outwardly visible functional changes should be minor,
limited mainly to the strings read back from setlocale and the way
gettext handles translations in categories other than LC_MESSAGES.
various internal refactoring has also been performed, and improvements
in const correctness have been made.
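
the resulting representation, roughly (per musl's locale_impl.h; exact
fields may differ by version):

    #include <stddef.h>

    #define LOCALE_NAME_MAX 23

    struct __locale_map {
        const void *map;     /* mmapped definition file, if any */
        size_t map_size;
        char name[LOCALE_NAME_MAX+1];
        const struct __locale_map *next;
    };

    struct __locale_struct {
        /* one slot per category; a null pointer means the C locale,
         * and names without definition files get a trivial map */
        const struct __locale_map *cat[6];
    };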
this is part of a general program of removing direct use of atomics
where they are not necessary to meet correctness or performance needs,
but in this case it's also an optimization. only the global locale
needs synchronization; allocated locales referenced with locale_t
handles are immutable during their lifetimes, and using atomics to
initialize them increases their cost of setup.
commit f3ddd173806fd5c60b3f034528ca24542aecc5b9 introduced early
relocations and subsequent reprocessing as part of the dynamic linker
bootstrap overhaul, to allow use of arbitrary libc functions before
the main application and libraries are loaded, but only reprocessed
GOT/PLT relocation types.
commit c093e2e8201524db0d638920e76bcb6b1d925f3a added reprocessing of
non-GOT/PLT relocations to fix an actual regression that was observed
on powerpc, but only for RELA format tables with out-of-line addends.
REL table (inline addends at the relocation address) reprocessing is
trickier because the first relocation pass clobbers the addends.
this patch extends symbolic relocation reprocessing for libc/ldso to
support all relocation types, whether REL or RELA format tables are
used. it is believed not to alter behavior on any existing archs for
the current dynamic linker and libc code. the motivations for this
change are consistency and future-proofing. it ensures that behavior
does not differ depending on whether REL or RELA tables are used,
which could lead to undetected arch-specific bugs. it also ensures
that, if in the future code depending on additional relocation types
is added to libc.so, either at the source level or as part of the
compiler runtime that gets pulled in (for example, soft-float with TLS
for fenv), the new code will work properly.
the implementation concept is simple: stage 2 of the dynamic linker
counts the number of symbolic relocations in the libc/ldso REL table
and allocates a VLA to save their addends into; stage 3 then uses the
saved addends in place of the inline ones which were clobbered. for
stack safety, a hard limit (currently 4k) is imposed on the number of
such addends; this should be a couple orders of magnitude larger than
the actual need. this number is not a runtime variable that could
break fail-safety; it is constant for a given libc.so build.
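
a simplified sketch of the scheme (is_symbolic and do_relocs are
illustrative stand-ins, not musl's real interfaces; a_crash is musl's
abort-on-corruption primitive):

    #define ADDEND_LIMIT 4096

    static size_t *saved_addends;

    void stage2_relocate(unsigned char *base, size_t *rel, size_t relsz)
    {
        size_t n = 0, i;
        for (i = 0; i < relsz/sizeof(size_t); i += 2)
            if (is_symbolic(rel[i+1])) n++;
        if (n > ADDEND_LIMIT) a_crash();   /* fail-safe hard cap */

        size_t addends[n ? n : 1];   /* VLA in stage 2's frame */
        for (n = 0, i = 0; i < relsz/sizeof(size_t); i += 2)
            if (is_symbolic(rel[i+1]))
                addends[n++] = *(size_t *)(base + rel[i]);
        saved_addends = addends;

        do_relocs(base, rel, relsz);   /* first pass clobbers addends */
        /* stage 3 runs while this frame is live and reads
         * saved_addends[] in place of the clobbered inline values */
    }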
this move eliminates a duplicate "by-hand" symbol lookup loop from the
stage-1 code and replaces it with a call to find_sym, which can be
used once we're in stage 2. it reduces the size of the stage 1 code,
which is helpful because stage 1 will become the crt start file for
static-PIE executables, and it will allow stage 3 to access stage 2's
automatic storage, which will be important in an upcoming commit.
otherwise disassemblers treat it as data.
the outer-loop approach made sense when we were also processing
DT_JMPREL, which might be in REL or RELA form, to avoid major code
duplication. commit 09db855b35709aa627d7055c57a98e1e471920ab removed
processing of DT_JMPREL, and in the remaining two tables, the format
(REL or RELA) is known by the name of the table. simply writing two
versions of the loop results in smaller and simpler code.
the DT_JMPREL relocation table necessarily consists entirely of
JMP_SLOT (REL_PLT in internal nomenclature) relocations, which are
symbolic; they cannot be resolved in stage 1, so there is no point in
processing them at that stage.
the comment claimed that EUC/GBK/Big5 are not implemented, which has
been incorrect since commit 19b4a0a20efc6b9df98b6a43536ecdd628ba4643.
while not a requirement, it's common convention in other iconv
implementations to accept "CHAR" as an alias for nl_langinfo(CODESET),
meaning the encoding used for char strings in the current locale,
and also "" as an alternate form. supporting this is not costly and
this fixes a regression on powerpc that was introduced in commit
f3ddd173806fd5c60b3f034528ca24542aecc5b9. global data accesses on
powerpc seem to be using a translation-unit-local GOT filled via
R_PPC_ADDR32 relocations rather than R_PPC_GLOB_DAT. being a non-GOT
relocation type, these were not reprocessed after adding the main
application and its libraries to the chain, causing libc code not to
see copy relocations in the main program, and therefore to use the
pre-copy-relocation addresses for global data objects (like environ).
the motivation for the dynamic linker only reprocessing GOT/PLT
relocation types in stage 3 is that these types always have a zero
addend, making them safe to process again even if the storage for the
addend has been clobbered. other relocation types which can be used
for address constants in initialized data objects may have non-zero
addends which will be clobbered during the first pass of relocation
processing if they're stored inline (REL form) rather than out-of-line
(RELA form).
powerpc generally uses only RELA, so this patch is sufficient to fix
the regression in practice, but is not fully general, and would not
suffice if an alternate toolchain generated REL for powerpc.
if setlocale has not been called, the current locale's messages_name
may be a null pointer. the code path where it's assumed to be non-null
was only reachable if bindtextdomain had already been called, which is
normally not done in programs which do not call setlocale, so the
omitted check went unnoticed.
patch from Void Linux, with description rewritten.
the code being removed used atomics to track whether any threads might
be using a locale other than the current global locale, and whether
any threads might have abstract 8-bit (non-UTF-8) LC_CTYPE active, a
feature which was never committed (still pending). the motivations
were to support early execution prior to setup of the thread pointer,
to partially support systems (ancient kernels) where thread pointer
setup is not possible, and to avoid high performance cost on archs
where accessing the thread pointer may be very slow.
since commit 19a1fe670acb3ab9ead0fe31859ca7d4fe40dd54, the thread
pointer is always available, so these hacks are no longer needed.
removing them greatly simplifies the affected code.
commit f630df09b1fd954eda16e2f779da0b5ecc9d80d3 added logic to handle
the case where __set_thread_area is called more than once by reusing
the GDT slot already in the %gs register, and only setting up a new
GDT slot when %gs is zero. this created a hidden assumption that %gs
is zero when a new process image starts, which is true in practice on
Linux, but does not seem to be documented ABI, and fails to hold under
qemu app-level emulation.
while it would in theory be possible to zero %gs in the entry point
code, this code is shared between static and dynamic binaries, and
dynamic binaries must not clobber the value of %gs already setup by
the dynamic linker.
the alternative solution implemented in this commit simply uses global
data to store the GDT index that's selected. __set_thread_area should
only be called in the initial thread anyway (subsequent threads get
their thread pointer setup by __clone), but even if it were called by
another thread, it would simply read and write back the same GDT index
that was already assigned to the initial thread, and thus (in the x86
memory model) there is no data race.
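
in C terms the fix looks roughly like this (musl's real implementation
is i386 asm; struct user_desc is the kernel's set_thread_area
interface):

    #include <asm/ldt.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int gdt_index = -1;   /* global: no reliance on initial %gs */

    int __set_thread_area(void *p)
    {
        struct user_desc desc = {
            .entry_number = gdt_index,  /* -1 asks the kernel to allocate */
            .base_addr = (unsigned long)p,
            .limit = 0xfffff,
            .seg_32bit = 1,
            .limit_in_pages = 1,
            .useable = 1,
        };
        if (syscall(SYS_set_thread_area, &desc) < 0) return -1;
        gdt_index = desc.entry_number;  /* reuse this slot on later calls */
        /* selector = index*8 | 3 (GDT, RPL 3) */
        __asm__ volatile ("movw %w0,%%gs" :: "q"(gdt_index*8 + 3));
        return 0;
    }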
a null pointer is valid here and indicates that the current time
should be used. based on patch by Felix Janda, simplified.
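
illustrated with the utimensat interface:

    #include <fcntl.h>
    #include <sys/stat.h>

    int touch(const char *path)
    {
        /* null times pointer: set both timestamps to the current time */
        return utimensat(AT_FDCWD, path, 0, 0);
    }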
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.
in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack TLS canary ABI) the
choice does not matter.
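
a sketch of the layout (member names and placement illustrative; each
arch's pthread_arch.h picks the slot its stack protector ABI
addresses):

    #include <stdint.h>

    struct pthread {
        struct pthread *self;     /* thread pointer target */
        uintptr_t *dtv;
        uintptr_t canary;         /* default location (i386, x86_64) */
        /* ... tid, errno storage, stdio_locks, ... */
        uintptr_t canary2;        /* alternate fixed-offset location */
        /* ... */
        uintptr_t canary_at_end;  /* for ABIs addressing from the end */
    };

    #ifndef CANARY
    #define CANARY canary   /* archs with no TLS canary ABI: any slot */
    #endif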
the 64-bit push reads not only the 32-bit return address but also the
first 32 signal mask bits. if any of those bits are nonzero, the
return address obtained will be invalid.
at some point storage of the return address should probably be moved
to follow the saved mask so that there's plenty of room and the same code
can be used on x32 and regular x86_64, but for now I want a fix that
does not risk breaking x86_64, and this simple re-zeroing works.
the kernel does not properly clear the upper bits of the syscall
argument, so we have to do it before the syscall.
These cases were incorrect in C11 as described by
due to an incorrect return statement in this error case, the
previously blocked cancellation state was not restored and no result
was stored. this could lead to invalid (read) accesses in the caller
resulting in crashes or nonsensical result data in the event of memory
exhaustion.
while the sh port is still experimental and subject to ABI
instability, this is not actually an application/libc boundary ABI
change. it only affects third-party APIs where jmp_buf is used in a
shared structure at the ABI boundary, because nothing anywhere near
the end of the jmp_buf object (which includes the oversized sigset_t)
is accessed by libc.
both glibc and uclibc have 15-slot jmp_buf for sh. presumably the
smaller version was used in musl because the slots for fpu status
register and thread pointer register (gbr) were incorrect and must not
be restored by longjmp, but the size should have been preserved, as
it's generally treated as a libc-agnostic ABI property for the arch,
and having extra slots free in case we ever need them for something is
a good thing.
at least some assembler versions do not accept the register name lr.
use the name x30 instead.
commit 646cb9a4a04e5ed78e2dd928bf9dc6e79202f609 switched sigsetjmp to
use the new hidden ___setjmp symbol for setjmp, but the nofpu variant
of setjmp.s was not updated to match.
both static and dynamic linked versions of the __copy_tls function
have a hidden assumption that the alignment of the beginning or end of
the memory passed is suitable for storing an array of pointers for the
dtv. pthread_create satisfies this requirement except when
libc.tls_size is misaligned, which cannot happen with dynamic linking
due to the way update_tls_size computes the total size, but could happen
with static linking and odd-sized TLS.
commit dab441aea240f3b7c18a26d2ef51979ea36c301c, which made thread
pointer init mandatory for all programs, rendered this store obsolete
by removing the early-return path for static programs with no TLS.
this slightly reduces the code size cost of TLS/thread-pointer for
static linking since __init_tp can be inlined into its only caller and
removed. this is analogous to the handling of __init_libc in
__libc_start_main, where the function only has external linkage when
it needs to be called from the dynamic linker.
the implicit-operand form of fucomip is rejected by binutils 2.19 and
perhaps other versions still in use. writing both operands explicitly
fixes the issue. there is no change to the resulting output.
commit a732e80d33b4fd6f510f7cec4f5573ef5d89bc4e was the source of this
regression.
use CAS instead of swap since it's lighter for most archs, and keep
EBUSY in the lock value so that the old value obtained by CAS can be
used directly as the return value for pthread_spin_trylock.
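
with a_cas being musl's atomic compare-and-swap returning the old
value, the whole function reduces to:

    #include <errno.h>
    #include <pthread.h>

    int pthread_spin_trylock(pthread_spinlock_t *s)
    {
        /* old value: 0 if we took the lock, EBUSY if already held,
         * which is exactly the required return value */
        return a_cas((volatile int *)s, 0, EBUSY);
    }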
the motivation for this change is that the extra declaration (with or
without visibility) using "struct _IO_FILE" instead of "FILE" seems to
trigger a bug in gcc 3.x where it considers the types mismatched.
however, this change also results in slightly better code and it is
valid because (1) these three objects are constant, and (2) applying
the & operator to any of them is invalid C, since they are not even
specified to be objects. thus it does not matter if the application
and libc see different addresses for them, as long as the (initial,
unchanging) value is seen the same by both.
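
for reference, the public declarations this interacts with, as in
musl's stdio.h: the names are const pointer objects, re-exposed
through identically named macros, and since ISO C specifies
stdin/stdout/stderr only as expressions, a conforming application can
never apply & to them:

    extern FILE *const stdin;
    extern FILE *const stdout;
    extern FILE *const stderr;

    #define stdin  (stdin)
    #define stdout (stdout)
    #define stderr (stderr)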