commit 201995f382cc698ae19289623cc06a70048ffe7b introduced a hack
utilizing the signedness of character constants at the preprocessor
level to avoid depending on the gcc-specific __CHAR_UNSIGNED__ predef.
while this trick works on gcc and presumably other compilers being
used, it's not clear that the behavior it depends on is actually
conforming. C11 6.4.4.4 ¶10 defines character constants as having type
int, and 6.10.1 ¶4 defines preprocessor #if arithmetic to take place
in intmax_t or uintmax_t, depending on the signedness of the integer
operand types, and it is specified that "this includes interpreting
character constants".
if character literals had type char and were merely promoted to int, it would
be clear that when char is unsigned they should behave as uintmax_t at
the preprocessor level. however, as written the text of the standard
seems to require that character constants always behave as intmax_t,
corresponding to int, at the preprocessor level.
since there is a good deal of ambiguity about the correct behavior and
a risk that compilers will disagree or that an interpretation may
mandate a change in the behavior, do not rely on it for defining
CHAR_MIN and CHAR_MAX correctly. instead, use the signedness of the
value (as opposed to the type) of '\xff', which will be positive if
and only if plain char is unsigned. this behavior is clearly
specified, and the specific case '\xff' is even used in an example,
under 6.4.4.4 of the standard.
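a minimal sketch of the resulting test, assuming 8-bit char; the
actual limits.h may differ in detail:

    /* the value of '\xff' is negative if and only if plain char is
     * signed; unlike the old type-signedness hack, this value-based
     * test is clearly specified by C11 6.4.4.4 */
    #if '\xff' > 0
    #define CHAR_MIN 0
    #define CHAR_MAX 255
    #else
    #define CHAR_MIN (-128)
    #define CHAR_MAX 127
    #endif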
POSIX requires the symlink function to fail with ENAMETOOLONG if the
link contents to be written exceed SYMLINK_MAX in length, but neither
Linux nor our syscall wrapper code enforces this. the value 255 for
SYMLINK_MAX is not meaningful and does not seem to have been motivated
by anything except perhaps a wrong assumption that a definition was
mandatory. it has been present (though moving through bits to
top-level limits.h) since the beginning of the project history.
[f]pathconf is entitled to return -1 as the limit for conf names for
which there is no hard limit, with the usual POSIX note that an
indefinite limit does not imply an infinite limit. in principle we
should perhaps report a limit for filesystems that impose one, but such
functionality is not currently present for any of the pathconf limits,
and adding it is beyond the scope of fixing the incorrect limit.
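for illustration, a caller can distinguish an indefinite limit from an
error like this (hypothetical usage, not part of the commit):

    #include <unistd.h>
    #include <errno.h>
    #include <stdio.h>

    int main(void)
    {
        /* -1 with errno left at 0 means "no hard limit", not failure */
        errno = 0;
        long lim = pathconf("/tmp", _PC_SYMLINK_MAX);
        if (lim == -1 && !errno)
            puts("SYMLINK_MAX: indefinite (but not infinite)");
        else
            printf("SYMLINK_MAX: %ld\n", lim);
        return 0;
    }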
PAGE_SIZE, NZERO, and NL_LANGMAX are XSI-shaded.
PAGESIZE is actually the version defined in POSIX base, with PAGE_SIZE
being in the XSI option. use PAGESIZE as the underlying definition to
facilitate making exposure of PAGE_SIZE conditional.
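a hedged sketch of the arrangement; the exact guard used for the XSI
option may differ in the real header:

    /* POSIX base name, defined unconditionally */
    #define PAGESIZE 4096 /* illustrative fixed value; not all archs
                             have a compile-time page size */

    /* XSI alias, now trivially made conditional */
    #ifdef _XOPEN_SOURCE
    #define PAGE_SIZE PAGESIZE
    #endif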
the old value of 20 was reported by Laurent Bercot as being
insufficient for a reasonable real-world use case. the actual problem
was the internal buffer used by ttyname(), but the implementation of
ttyname uses TTY_NAME_MAX, and for consistency it's best to increase
both. the new value is aligned with glibc.
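the coupling is visible in a typical caller (illustrative usage):

    #include <unistd.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* a TTY_NAME_MAX buffer must fit anything ttyname_r() can
         * produce, so the two limits have to move together */
        char buf[TTY_NAME_MAX];
        if (!ttyname_r(0, buf, sizeof buf))
            printf("stdin tty: %s\n", buf);
        return 0;
    }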
fcntl.h: AT_* is not a reserved namespace, so extensions cannot be
exposed by default.
langinfo.h: YESSTR and NOSTR were removed from the standard.
limits.h: NL_NMAX was removed from the standard.
signal.h: the conditional for NSIG was wrongly checking _XOPEN_SOURCE
rather than _BSD_SOURCE. this was purely a mistake; it doesn't even
match the commit message from the commit that added it.
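a sketch of the corrected guard, reconstructed from the description
above (not copied from the header):

    /* was: #ifdef _XOPEN_SOURCE -- the mistake described above */
    #ifdef _BSD_SOURCE
    #define NSIG _NSIG
    #endif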
PAGE_SIZE was hardcoded to 4096, which is historically what most
systems use, but on several archs it is a kernel config parameter
that user space can only learn at execution time from the aux vector.
PAGE_SIZE and PAGESIZE are not defined on archs where the page size
is a runtime parameter; applications should use sysconf(_SC_PAGE_SIZE)
to query it. Internally libc code defines PAGE_SIZE to libc.page_size,
which is set to aux[AT_PAGESZ] in __init_libc and early in __dynlink
as well. (Note that libc.page_size can be accessed without the GOT,
i.e. before relocations are done.)
Some fpathconf settings are hardcoded to 4096; these should actually
be queried from the filesystem using statfs.
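Illustrative query from application code (sysconf is the portable
interface mentioned above):

    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        /* the only reliable way to learn the page size on archs
         * where it is a kernel config parameter */
        long ps = sysconf(_SC_PAGE_SIZE);
        printf("page size: %ld\n", ps);
        return 0;
    }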
the main goal of these changes is to address the case where an
application provides a stack of size N, but TLS has size M that's a
significant portion of the size N (or even larger than N), thus giving
the application less stack space than it expected or no stack at all!
the new strategy pthread_create now uses is to only put TLS on the
application-provided stack if TLS is smaller than 1/8 of the stack
size or 2k, whichever is smaller. this ensures that the application
always has "close enough" to what it requested, and the threshold is
chosen heuristically to make sure "sane" amounts of TLS still end up
in the application-provided stack.
if TLS does not fit the above criteria, pthread_create uses mmap to
obtain space for TLS, but still uses the application-provided stack
for actual call frame stack. this is to avoid wasting memory, and for
the sake of supporting ugly hacks like garbage collection based on
assumptions that the implementation will use the provided stack range.
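in rough terms, the placement decision described above looks like
this (a sketch; names are illustrative, not musl's internals):

    #include <stddef.h>

    /* nonzero if TLS should be carved out of the application-
     * provided stack: smaller than 1/8 of the stack size or 2k,
     * whichever is smaller */
    static int tls_fits_on_stack(size_t tls_size, size_t stack_size)
    {
        size_t limit = stack_size/8 < 2048 ? stack_size/8 : 2048;
        return tls_size < limit;
    }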
in order for the above heuristics to ever succeed, the amount of TLS
space wasted on POSIX TSD (pthread_key_create based) needed to be
reduced. otherwise, these changes would preclude any use of
pthread_create without mmap, which would have serious memory usage and
performance costs for applications trying to create huge numbers of
threads using pre-allocated stack space. the new value of
PTHREAD_KEYS_MAX is the minimum allowed by POSIX, 128. this should
still be plenty more than real-world applications need, especially now
that C11/gcc-style TLS is supported in musl, and most apps and
libraries choose to use that instead of POSIX TSD when available.
at the same time, PTHREAD_STACK_MIN has been decreased. it was
originally set to PAGE_SIZE back when there was no support for TLS or
application-provided stacks, and requests smaller than a whole page
did not make sense. now, there are two good reasons to support
requests smaller than a page: (1) applications could provide
pre-allocated stacks smaller than a page, and (2) with smaller stack
sizes, stack+TLS+TSD can all fit in one page, making it possible for
applications which need huge numbers of threads with minimal stack
needs to allocate exactly one page per thread. the new value of
PTHREAD_STACK_MIN, 2k, is aligned with the minimum size for
sigaltstack.
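with the new minimum, a minimal-footprint thread can be requested
like this (illustrative usage):

    #include <pthread.h>
    #include <limits.h>

    static void *task(void *arg) { return arg; }

    int main(void)
    {
        pthread_t t;
        pthread_attr_t a;
        pthread_attr_init(&a);
        /* a 2k request: stack+TLS+TSD can now share a single page */
        pthread_attr_setstacksize(&a, PTHREAD_STACK_MIN);
        pthread_create(&t, &a, task, 0);
        pthread_join(t, 0);
        return 0;
    }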
the missing check did not affect the default profile, since it has
both _XOPEN_SOURCE and _BSD_SOURCE defined, but it did break programs
which explicitly define _BSD_SOURCE, causing it to be the only feature
test macro present.
the old behavior of exposing nothing except plain ISO C can be
obtained by defining __STRICT_ANSI__ or using a compiler option (such
as -std=c99) that predefines it. the new default featureset is POSIX
with XSI plus _BSD_SOURCE. any explicit feature test macros will
inhibit the default.
installation docs have also been updated to reflect this change.
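a hedged sketch of how the default can be selected in features.h (the
real macro checks may differ):

    /* nothing requested explicitly and not in strict conformance
     * mode: default to POSIX with XSI plus _BSD_SOURCE */
    #if !defined(__STRICT_ANSI__) && !defined(_POSIX_SOURCE) \
     && !defined(_POSIX_C_SOURCE) && !defined(_XOPEN_SOURCE) \
     && !defined(_BSD_SOURCE) && !defined(_GNU_SOURCE)
    #define _BSD_SOURCE 1
    #define _XOPEN_SOURCE 700
    #endif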
some software apparently uses this and breaks with musl due to
mismatching definitions...
this implementation is superior to the glibc/nptl implementation, in
that it gives true realtime behavior. there is no risk of timer
expiration events being lost due to failed thread creation or failed
malloc, because the thread is created at timer creation time, and
reused until the timer is deleted.
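the interface affected is SIGEV_THREAD notification (illustrative
usage; under this design the worker thread exists from timer_create
until timer_delete):

    #include <signal.h>
    #include <time.h>
    #include <unistd.h>
    #include <stdio.h>

    static void expired(union sigval v) { (void)v; puts("tick"); }

    int main(void)
    {
        timer_t t;
        struct sigevent ev = {
            .sigev_notify = SIGEV_THREAD,
            .sigev_notify_function = expired,
        };
        /* the notification thread is created once, here */
        timer_create(CLOCK_REALTIME, &ev, &t);
        struct itimerspec its = { .it_value.tv_sec = 1 };
        timer_settime(t, 0, &its, 0);
        sleep(2);
        timer_delete(t);
        return 0;
    }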
multiple opens of the same named semaphore must return the same
pointer, and only the last close can unmap it. thus the ugly global
state keeping track of mappings. the maximum number of distinct named
semaphores that can be opened is limited to a value small enough that the
linear searches take trivial time, especially compared to the syscall
overhead of these functions.
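a sketch of the kind of table described (field names are illustrative,
not musl's):

    #include <semaphore.h>
    #include <sys/types.h>

    #define MAXSEMS 256 /* illustrative cap, cf. SEM_NSEMS_MAX */

    /* one slot per distinct named semaphore currently open; small
     * enough that linear search costs less than the syscalls */
    static struct {
        ino_t ino;    /* identity of the backing file */
        sem_t *sem;   /* same mapping returned to every opener */
        int refcnt;   /* only the last sem_close may unmap */
    } semtab[MAXSEMS];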
thanks to Peter Mazinger (psm) for pointing many of these issues out
and submitting a patch on which this commit is loosely based.