2020-08-27  deduplicate __pthread_self thread pointer adjustment out of each arch (Rich Felker, -65/+65)
the adjustment made is entirely a function of TLS_ABOVE_TP and TP_OFFSET. aside from avoiding repetition of the TP_OFFSET value and arithmetic, this change makes pthread_arch.h independent of the definition of struct __pthread from pthread_impl.h. this in turn will allow inclusion of pthread_arch.h to be moved to the top of pthread_impl.h so that it can influence the definition of the structure. previously, arch files were very inconsistent about the type used for the thread pointer. this change unifies the new __get_tp interface to always use uintptr_t, which is the most correct when performing arithmetic that may involve addresses outside the actual pointed-to object (due to TP_OFFSET).
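illustrative sketch of the unified adjustment, assuming the macro, type, and struct names from this commit message (the exact code in pthread_impl.h may differ slightly):

    /* per-arch; always returns the raw thread pointer as uintptr_t (<stdint.h>) */
    static inline uintptr_t __get_tp(void);

    #if TLS_ABOVE_TP
    #define __pthread_self() ((pthread_t)(__get_tp() - sizeof(struct __pthread) - TP_OFFSET))
    #else
    #define __pthread_self() ((pthread_t)__get_tp())
    #endif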
2020-08-24  deduplicate TP_ADJ logic out of each arch, replace with TP_OFFSET (Rich Felker, -21/+16)
the only part of TP_ADJ that was not uniquely determined by TLS_ABOVE_TP was the 0x7000 adjustment used mainly on mips and powerpc variants.
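a sketch of the shared definition this enables, with TP_OFFSET defaulting to 0 and the mips/powerpc variants overriding it with 0x7000 (macro names as in the commit; details may differ):

    #ifndef TP_OFFSET
    #define TP_OFFSET 0
    #endif

    #if TLS_ABOVE_TP
    #define TP_ADJ(p) ((char *)(p) + sizeof(struct pthread) + TP_OFFSET)
    #else
    #define TP_ADJ(p) (p)
    #endif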
2020-08-24  report res_query failures, including nxdomain/nodata, via h_errno (Rich Felker, -1/+15)
while it's not clearly documented anywhere, this is the historical behavior which some applications expect. applications which need to see the response packet in these cases, for example to distinguish between nonexistence in a secure vs insecure zone, must already use res_mkquery with res_send in order to be portable, since most if not all other implementations of res_query don't provide it.
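for illustration, a caller relying on this historical behavior might look like the following (the hostname is a placeholder):

    #include <resolv.h>
    #include <netdb.h>

    unsigned char ans[512];
    int n = res_query("example.invalid", 1, 1, ans, sizeof ans); /* class IN, type A */
    if (n < 0) {
        switch (h_errno) {
        case HOST_NOT_FOUND: /* nxdomain */ break;
        case NO_DATA:        /* name exists but has no records of this type */ break;
        case TRY_AGAIN:      /* transient server failure */ break;
        }
    }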
2020-08-24  make h_errno thread-local (Rich Felker, -4/+3)
the framework to do this always existed but it was deemed unnecessary because the only [ex-]standard functions using h_errno were not thread-safe anyway. however, some of the nonstandard res_* functions are also supposed to set h_errno to indicate the cause of error, and were unable to do so because it was not thread-safe. this change is a prerequisite for fixing them.
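a hedged sketch of the mechanism (the per-thread field name here is an assumption):

    int *__h_errno_location(void)
    {
        return &__pthread_self()->h_errno_val; /* per-thread slot; field name assumed */
    }

    /* netdb.h can then define: #define h_errno (*__h_errno_location()) */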
2020-08-24  add tcgetwinsize and tcsetwinsize functions, move struct winsize (Rich Felker, -7/+29)
these have been adopted for future issue of POSIX as the outcome of Austin Group issue 1151, and are simply functions performing the roles of the historical ioctls. since struct winsize is being standardized along with them, its definition is moved to the appropriate header. there is some chance this will break source files that expect struct winsize to be defined by sys/ioctl.h without including termios.h. if this happens, further changes will be needed to have sys/ioctl.h expose it too.
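since the functions simply perform the roles of the historical ioctls, a minimal sketch is just:

    #include <termios.h>
    #include <sys/ioctl.h>

    int tcgetwinsize(int fd, struct winsize *wsz)
    {
        return ioctl(fd, TIOCGWINSZ, wsz);
    }

    int tcsetwinsize(int fd, const struct winsize *wsz)
    {
        return ioctl(fd, TIOCSWINSZ, wsz);
    }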
2020-08-22  fix MUSL_LOCPATH search (Rich Felker, -1/+1)
all path elements but the last had the final byte truncated.
2020-08-17  add gettid function (Rich Felker, -0/+9)
this is a prerequisite for addition of other interfaces that use kernel tids, including futex and SIGEV_THREAD_ID. there is some ambiguity as to whether the semantic return type should be int or pid_t. either way, futex API imposes a contract that the values fit in int (excluding some upper reserved bits). glibc used pid_t, so in the interest of not having gratuitous mismatch (the underlying types are the same anyway), pid_t is used here as well. while conceptually this is a syscall, the copy stored in the thread structure is always valid in all contexts where it's valid to call libc functions, so it's used to avoid the syscall.
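given the cached copy described above, the function reduces to something like:

    #include <unistd.h>

    pid_t gettid(void)
    {
        return __pthread_self()->tid; /* tid cached in the thread structure */
    }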
2020-08-12  aarch64: fix setjmp return value (Szabolcs Nagy, -4/+3)
longjmp should set the return value of setjmp, but 64bit registers were used for the 0 check while the type is int. use the code that gcc generates for return val ? val : 1;
2020-08-12  setjmp: optimize longjmp prologues (Alexander Monakov, -14/+8)
Use a branchless sequence that is one byte shorter on 64-bit, same size on 32-bit. Thanks to Pete Cawley for suggesting this variant.
2020-08-11  setjmp: optimize x86 longjmp epilogues (Alexander Monakov, -12/+6)
2020-08-11  setjmp: avoid useless REX-prefix on xor %eax, %eax (Alexander Monakov, -2/+2)
2020-08-11  setjmp: fix x86-64 longjmp argument adjustment (Alexander Monakov, -6/+6)
longjmp 'val' argument is an int, but the assembly is referencing 64-bit registers as if the argument was a long, or the caller was responsible for extending the argument. Though the psABI is not clear on this, the interpretation in GCC is that high bits may be arbitrary and the callee is responsible for sign/zero-extending the value as needed (likewise for return values: callers must anticipate that high bits may be garbage). Therefore testing %rax is a functional bug: setjmp would wrongly return zero if longjmp was called with val==0, but high bits of %rsi happened to be non-zero. Rewrite the prologue to refer to 32-bit registers. In passing, change 'test' to use %rsi, as there's no advantage to using %rax and the new form is cheaper on processors that do not perform move elimination.
2020-08-08  prefer new socket syscalls, fallback to SYS_socketcall only if needed (Rich Felker, -14/+27)
a number of users performing seccomp filtering have requested use of the new individual syscall numbers for socket syscalls, rather than the legacy multiplexed socketcall, since the latter has the arguments all in memory where they can't participate in filter decisions.

previously, some archs used the multiplexed socketcall if it was historically all that was available, while other archs used the separate syscalls. the intent was that the latter set only include archs that have "always" had separate socket syscalls, at least going back to linux 2.6.0. however, at least powerpc, powerpc64, and sh were wrongly included in this set, and thus socket operations completely failed on old kernels for these archs.

with the changes made here, the separate syscalls are always preferred, but fallback code is compiled for archs that also define SYS_socketcall. two such archs, mips (plain o32) and microblaze, define SYS_socketcall despite never having needed it, so it's now undefined by their versions of syscall_arch.h to prevent inclusion of useless fallback code.

some archs, where the separate syscalls were only added after the addition of SYS_accept4, lack SYS_accept. because socket calls are always made with zeros in the unused argument positions, it suffices to just use SYS_accept4 to provide a definition of SYS_accept, and this is done to make happy the macro machinery that concatenates the socket call name onto __SC_ and SYS_.
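a sketch of the fallback shape for a single call, simplified from the macro-generated form (__syscall is musl-internal; 1 is the socketcall multiplex number for socket, SYS_SOCKET in linux/net.h):

    #include <sys/syscall.h>
    #include <errno.h>

    static long do_socket(int dom, int type, int proto)
    {
    #ifdef SYS_socketcall
        long r = __syscall(SYS_socket, dom, type, proto);
        if (r != -ENOSYS) return r;
        /* old kernel: fall back to the multiplexed entry point */
        unsigned long args[3] = { dom, type, proto };
        return __syscall(SYS_socketcall, 1 /* SYS_SOCKET */, args);
    #else
        return __syscall(SYS_socket, dom, type, proto);
    #endif
    }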
2020-08-05  math: new software sqrtl (Szabolcs Nagy, -1/+253)
same approach as in sqrt. sqrtl was broken on aarch64, riscv64 and s390x targets because of missing quad precision support and on m68k-sf because of missing ld80 sqrtl. this implementation is written for quad precision and then edited to make it work for both m68k and x86 style ld80 formats too, but it is not expected to be optimal for them. note: using fp instructions for the initial estimate when such instructions are available (e.g. double prec sqrt or rsqrt) is avoided because of fenv correctness.
2020-08-05  math: add __math_invalidl (Szabolcs Nagy, -0/+12)
for targets where long double is different from double.
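presumably following the same pattern as the double-precision __math_invalid; a sketch:

    long double __math_invalidl(long double x)
    {
        return (x - x) / (x - x); /* 0/0: raises the invalid exception, yields NaN */
    }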
2020-08-05  math: new software sqrtf (Szabolcs Nagy, -70/+70)
same method as in sqrt; this was tested on all inputs against an sqrtf instruction. (the only difference found was that x86 sqrtf does not signal the x86-specific input-denormal exception on negative subnormal inputs while the software sqrtf does; this is fine, as it was designed for ieee754 exceptions only.) there is a known faster method, "Computing Floating-Point Square Roots via Bivariate Polynomial Evaluation", that computes sqrtf directly via pipelined polynomial evaluation, which allows more parallelism, but the design does not generalize easily to higher precisions.
2020-08-05  math: new software sqrt (Szabolcs Nagy, -173/+179)
approximate 1/sqrt(x) and sqrt(x) with goldschmidt iterations. this is known to be a fast method for computing sqrt, but it is tricky to get right, so detailed comments were added. a lookup table is used for the initial estimate; this adds 256 bytes of rodata, but it can be shared between sqrt, sqrtf and sqrtl, and it saves one iteration compared to a linear estimate.

this is for soft float targets, but it supports fenv by using a floating-point operation to get the final result. the result is correctly rounded in all rounding modes. if fenv support is turned off then the nearest rounded result is computed and the inexact exception is not signaled.

assumes fast 32-bit integer arithmetic and 32-to-64-bit multiplication.
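for intuition only, the goldschmidt recurrence in floating point (the actual code performs it in 32/64-bit integer arithmetic as noted above):

    /* given r ~ 1/sqrt(x): s converges to sqrt(x), m to 1/(2*sqrt(x)) */
    static double goldschmidt(double x, double r)
    {
        double s = x * r;
        double m = r * 0.5;
        for (int i = 0; i < 3; i++) {
            double d = 0.5 - m * s; /* residual error term */
            s += s * d;             /* s *= 1 + d */
            m += m * d;             /* m *= 1 + d */
        }
        return s;
    }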
2020-08-05  in hosts file lookups, honor first canonical name regardless of family (Rich Felker, -1/+1)
prior to this change, the canonical name came from the first hosts file line matching the requested family, so the canonical name for a given hostname could differ depending on whether it was requested with AF_UNSPEC or a particular family (AF_INET or AF_INET6). now, the canonical name is deterministically the first one to appear with the requested name as an alias.
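for example, given a hosts file like the following (names are placeholders):

    127.0.0.1  host4.example  alias1
    ::1        host6.example  alias1

a lookup of alias1 now reports host4.example as the canonical name for AF_UNSPEC, AF_INET, and AF_INET6 alike; previously an AF_INET6 query would have reported host6.example.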
2020-08-04  in hosts file lookups, use only first match for canonical name (Rich Felker, -2/+7)
the existing code clobbered the canonical name already discovered every time another matching line was found, which will necessarily be the case when a hostname has both IPv4 and v6 definitions. patch by Wolf.
2020-08-04  release 1.2.1 [tag: v1.2.1] (Rich Felker, -1/+37)
2020-08-02  add m68k sqrtl using native instruction (Rich Felker, -0/+15)
this is actually a functional fix at present, since the C sqrtl does not support ld80 and just wraps double sqrt. once that's fixed it will just be an optimization.
2020-07-24  getentropy: fix UB if len==0 (Bartosz Brachaczek, -1/+1)
if len==0, an uninitialized variable would be returned.
2020-07-06  fix async-cancel-safety of pthread_cancel (Rich Felker, -1/+4)
the previous commit addressing async-signal-safety issues around pthread_kill did not fully fix pthread_cancel, which is also required (albeit rather irrationally) to be async-cancel-safe. without blocking implementation-internal signals, it's possible that, when async cancellation is enabled, a cancel signal sent by another thread interrupts pthread_kill while the killlock for a targeted thread is held. as a result, the calling thread will terminate due to cancellation without ever unlocking the targeted thread's killlock, and thus the targeted thread will be unable to exit.
2020-07-06  make thread killlock async-signal-safe for pthread_kill (Rich Felker, -5/+18)
pthread_kill is required to be AS-safe. that requirement can't be met if the target thread's killlock can be taken in contexts where application-installed signal handlers can run. block signals around use of this lock in all pthread_* functions which target a tid, and reorder blocking/unblocking of signals in pthread_exit so that they're blocked whenever the killlock is held.
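the pattern described, sketched with musl's internal helpers (__block_app_sigs/__restore_sigs; details simplified):

    static void with_killlock(pthread_t t)
    {
        sigset_t set;
        __block_app_sigs(&set);  /* keep application signal handlers from running */
        LOCK(t->killlock);
        /* ... act on the target thread (e.g. send the signal) ... */
        UNLOCK(t->killlock);
        __restore_sigs(&set);
    }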
2020-07-05  fix C implementation of a_clz_32 (Rich Felker, -1/+1)
this broke mallocng size_to_class on archs without a native implementation of a_clz_32. the incorrect logic seems to have been something i derived from a related but distinct log2-type operation. with the change made here, it passes an exhaustive test. as this function is new and presently only used by mallocng, no other functionality was affected.
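as a reference point for what the fallback must compute (not necessarily musl's exact expression), a portable count-leading-zeros for nonzero x:

    #include <stdint.h>

    static int clz32_ref(uint32_t x)
    {
        int n = 0;
        if (!(x & 0xffff0000u)) { n += 16; x <<= 16; }
        if (!(x & 0xff000000u)) { n += 8;  x <<= 8;  }
        if (!(x & 0xf0000000u)) { n += 4;  x <<= 4;  }
        if (!(x & 0xc0000000u)) { n += 2;  x <<= 2;  }
        if (!(x & 0x80000000u)) n += 1;
        return n;
    }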
2020-07-02  vfscanf: fix possible invalid free due to uninitialized variable use (Julien Ramseier, -1/+1)
vfscanf() may use the variable 'alloc' uninitialized when taking the branch introduced by commit b287cd745c2243f8e5114331763a5a9813b5f6ee. Spotted by clang.
2020-06-30  make mallocng the default malloc implementation (Rich Felker, -3/+3)
2020-06-30  add malloc implementation selection to configure (Rich Felker, -0/+12)
the intent here is to keep oldmalloc as an option, at least for the short term, in case any users are negatively impacted in some way by mallocng and need to fallback until their issues are resolved.
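usage, assuming the option is spelled --with-malloc (with mallocng as the default per the commit above):

    ./configure --with-malloc=oldmalloc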
2020-06-30  import mallocng (Rich Felker, -13/+938)
the files added come from the mallocng development repo, commit 2ed58817cca5bc055974e5a0e43c280d106e696b. they comprise a new malloc implementation, developed over the past 9 months, to replace the old allocator (since dubbed "oldmalloc") with one that retains low code size and minimal baseline memory overhead while avoiding fundamental flaws in oldmalloc and making significant enhancements. these include highly controlled fragmentation, fine-grained ability to return memory to the system when freed, and strong hardening against dynamic memory usage errors by the caller.

internally, mallocng derives most of these properties from tightly structuring memory, creating space for allocations as uniform-sized slots within individually mmapped (and individually freeable) allocation groups. smaller-than-pagesize groups are created within slots of larger ones. minimal group size is very small, and larger sizes (in geometric progression) only come into play when usage is high.

all data necessary for maintaining consistency of the allocator state is tracked in out-of-band metadata, reachable via a validated path from minimal in-band metadata. all pointers passed (to free, etc.) are validated before any stores to memory take place. early reuse of freed slots is avoided via approximate LRU order of freed slots. further hardening against use-after-free and double-free, even in the case where the freed slot has been reused, is made by cycling the offset within the slot at which the allocation is placed; this is possible whenever the slot size is larger than the requested allocation.
2020-06-29  add glue code for mallocng merge (Rich Felker, -0/+129)
this includes both an implementation of reclaimed-gap donation from ldso and a version of mallocng's glue.h with namespace-safe linkage to underlying syscalls, integration with AT_RANDOM initialization, and internal locking that's optimized out when the process is single-threaded.
2020-06-26  add optimized aarch64 memcpy and memset (Rich Felker, -0/+304)
these are based on the ARM optimized-routines repository v20.05 (ef907c7a799a), with macro dependencies flattened out and memmove code removed from memcpy. this change is somewhat unfortunate since having the branch for memmove support in the large n case of memcpy is the performance-optimal and size-optimal way to do both, but it makes memcpy alone (static-linked) about 40% larger and suggests a policy that use of memcpy as memmove is supported. tabs used for alignment have also been replaced with spaces.
2020-06-25  add big-endian support to ARM assembler memcpy (Andre McCurdy, -8/+98)
Allow the existing ARM assembler memcpy implementation to be used for both big and little endian targets.
2020-06-21  clear need_locks in child after fork (Rich Felker, -0/+1)
the child is single-threaded, but may still need to synchronize with last changes made to memory by another thread in the parent, so set need_locks to -1 whereby the next lock-taker will drop to 0 and prevent further barriers/locking.
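the handoff is essentially one store in the child (sketch):

    /* in the child, after fork: */
    if (libc.need_locks) libc.need_locks = -1;
    /* the next LOCK() observes -1, takes the lock once to synchronize with
       the parent's last unlock, then stores 0 so later LOCK()s skip locking */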
2020-06-16  only use memcpy realloc to shrink if an exact-sized free chunk exists (Rich Felker, -0/+12)
otherwise, shrink in-place. as explained in the description of commit 3e16313f8fe2ed143ae0267fd79d63014c24779f, the split here is valid without holding split_merge_lock because all chunks involved are in the in-use state.
2020-06-16  fix memset overflow in oldmalloc race fix overhaul (Rich Felker, -1/+1)
commit 3e16313f8fe2ed143ae0267fd79d63014c24779f introduced this bug by making the copy case reachable with n (new size) smaller than n0 (original size). this was left as the only way of shrinking an allocation because it reduces fragmentation if a free chunk of the appropriate size is available. when that's not the case, another approach may be better, but any such improvement would be independent of fixing this bug.
2020-06-15  fix invalid use of access function in nftw (Rich Felker, -4/+18)
access always computes result with real ids not effective ones, so it is not a valid means of determining whether the directory is readable. instead, attempt to open it before reporting whether it's readable, and then use fdopendir rather than opendir to open and read the entries. effort is made here to keep fd_limit behavior the same as before even if it was not correct.
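a sketch of the open-then-fdopendir approach described (probe_dir is an illustrative name):

    #include <fcntl.h>
    #include <dirent.h>
    #include <unistd.h>

    static int probe_dir(const char *path)
    {
        int fd = open(path, O_RDONLY|O_DIRECTORY|O_CLOEXEC);
        if (fd < 0) return -1;       /* caller reports the directory as unreadable (FTW_DNR) */
        DIR *d = fdopendir(fd);
        if (!d) { close(fd); return -1; }
        /* ... iterate entries with readdir(d) ... */
        closedir(d);                 /* also closes fd */
        return 0;
    }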
2020-06-11  add fallback a_clz_32 implementation (Rich Felker, -0/+15)
some archs already have a_clz_32, used to provide a_ctz_32, but it hasn't been mandatory because it's not used anywhere yet. mallocng will need it, however, so add it now. it should probably be optimized better, but doesn't seem to make a difference at present.
2020-06-10  only disable aligned_alloc if malloc was replaced but it wasn't (Rich Felker, -1/+2)
if both malloc and aligned_alloc have been replaced but the internal aligned_alloc still gets called, the replacement is a wrapper of some sort. it's not clear if this usage should be officially supported, but it's at least a plausibly interesting debugging usage, and easy to do. it should not be relied upon unless it's documented as supported at some later time.
2020-06-10  have ldso track replacement of aligned_alloc (Rich Felker, -0/+4)
this is in preparation for improving behavior of malloc interposition.
2020-06-10  reintroduce calloc elision of memset for direct-mmapped allocations (Rich Felker, -1/+15)
a new weak predicate function replaceable by the malloc implementation, __malloc_allzerop, is introduced. by default it's always false; the default version will be used when static linking if the bump allocator was used (in which case performance doesn't matter) or if malloc was replaced by the application. only if the real internal malloc is linked (always the case with dynamic linking) does the real version get used. if malloc was replaced dynamically, as indicated by __malloc_replaced, the predicate function is ignored and conditional-memset is always performed.
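the resulting calloc logic, sketched (overflow check elided; __malloc_allzerop and __malloc_replaced as named above):

    #include <stdlib.h>
    #include <string.h>

    void *calloc(size_t m, size_t n)
    {
        void *p = malloc(m * n); /* the real code checks m*n for overflow first */
        if (p && !__malloc_replaced && __malloc_allzerop(p))
            return p;            /* freshly mmapped: already zero */
        return p ? memset(p, 0, m * n) : 0;
    }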
2020-06-10  move __malloc_replaced to a top-level malloc file (Rich Felker, -2/+3)
it's not part of the malloc implementation but glue with the musl dynamic linker.
2020-06-10  switch to a common calloc implementation (Rich Felker, -47/+37)
abstractly, calloc is completely malloc-implementation-independent; it's malloc followed by memset, or as we do it, a "conditional memset" that avoids touching fresh zero pages.

previously, calloc was kept separate for the bump allocator, which can always skip memset, and the version of calloc provided with the full malloc conditionally skipped the clearing for large direct-mmapped allocations. the latter is a moderately attractive optimization, and can be added back if needed. however, further consideration to make it correct under malloc replacement would be needed.

commit b4b1e10364c8737a632be61582e05a8d3acf5690 documented the contract for malloc replacement as allowing omission of calloc, and indeed that worked for dynamic linking, but for static linking it was possible to get the non-clearing definition from the bump allocator; if not for that, it would have been a link error trying to pull in malloc.o.

the conditional-clearing code for the new common calloc is taken from mal0_clear in oldmalloc, but drops the need to access actual page size and just uses a fixed value of 4096. this avoids potentially needing access to global data for the sake of an optimization that at best marginally helps archs with offensively-large page sizes.
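an illustrative version of the conditional clearing at fixed 4096-byte granularity (not the exact mal0_clear code): reads of untouched pages hit the kernel's shared zero page, so skipping the write avoids committing memory.

    #include <string.h>
    #include <stddef.h>

    static void cond_clear(unsigned char *p, size_t n)
    {
        for (size_t i = 0; i < n; i += 4096) {
            size_t len = n - i < 4096 ? n - i : 4096;
            size_t j;
            for (j = 0; j < len && !p[i+j]; j++);
            if (j < len) memset(p + i, 0, len); /* write only blocks that aren't already zero */
        }
    }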
2020-06-03  move oldmalloc to its own directory under src/malloc (Rich Felker, -1/+2)
this sets the stage for replacement, and makes it practical to keep oldmalloc around as a build option for a while if that ends up being useful. only the files which are actually part of the implementation are moved. memalign and posix_memalign are entirely generic. in theory calloc could be pulled out too, but it's useful to have it tied to the implementation so as to optimize out unnecessary memset when implementation details make it possible to know the memory is already clear.
2020-06-03  move __expand_heap into malloc.c (Rich Felker, -73/+64)
this function is no longer used elsewhere, and moving it reduces the number of source files specific to the malloc implementation.
2020-06-03  rename memalign source file back to its proper name (Rich Felker, -0/+0)
2020-06-03  rename aligned_alloc source file back to its proper name (Rich Felker, -0/+0)
2020-06-03  reverse dependency order of memalign and aligned_alloc (Rich Felker, -10/+5)
this change eliminates the internal __memalign function and makes the memalign and posix_memalign functions completely independent of the malloc implementation, written portably in terms of aligned_alloc.
2020-06-03  rename aligned_alloc source file (Rich Felker, -0/+0)
this is the first step of swapping the name of the actual implementation to aligned_alloc while keeping the file history followable across the rename.
2020-06-03  remove stale document from malloc src directory (Rich Felker, -22/+0)
this was an unfinished draft document present since the initial check-in, that was never intended to ship in its current form. remove it as part of reorganizing for replacement of the allocator.
2020-06-03  rewrite bump allocator to fix corner cases, decouple from expand_heap (Rich Felker, -17/+72)
this affects the bump allocator used when static linking in programs that don't need allocation metadata due to not using realloc, free, etc.

commit e3bc22f1eff87b8f029a6ab31f1a269d69e4b053 refactored the bump allocator to share code with __expand_heap, used by malloc, for the purpose of fixing the case (mainly nommu) where brk doesn't work. however, the geometric growth behavior of __expand_heap is not actually well-suited to the bump allocator, and can produce significant excessive memory usage. in particular, by repeatedly requesting just over the remaining free space in the current mmap-allocated area, the total mapped memory will be roughly double the nominal usage. and since the main user of the no-brk mmap fallback in the bump allocator is nommu, this excessive usage is not just virtual address space but physical memory.

in addition, even on systems with brk, having a unified size request to __expand_heap without knowing whether the brk or mmap backend would get used made it so the brk could be expanded twice as far as needed. for example, with malloc(n) and n-1 bytes available before the current brk, the brk would be expanded by n bytes rounded up to page size, when expansion by just one page would have sufficed.

the new implementation computes request size separately for the cases where brk expansion is being attempted vs using mmap, and also performs individual mmap of large allocations without moving to a new bump area and throwing away the rest of the old one. this greatly reduces the need for geometric area size growth and limits the extent to which free space at the end of one bump area might be unusable for future allocations. as a bonus, the resulting code size is somewhat smaller than the combined old version plus __expand_heap.