Age | Commit message | Author | Lines |
|
Author: Xiaojuan Zhai <zhaixiaojuan@loongson.cn>
Author: Meidan Li <limeidan@loongson.cn>
Author: Guoqi Chen <chenguoqi@loongson.cn>
Author: Xiaolin Zhao <zhaoxiaolin@loongson.cn>
Author: Fan peng <fanpeng@loongson.cn>
Author: Jiantao Shan <shanjiantao@loongson.cn>
Author: Xuhui Qiang <qiangxuhui@loongson.cn>
Author: Jingyun Hua <huajingyun@loongson.cn>
Author: Liu xue <liuxue@loongson.cn>
Author: Hongliang Wang <wanghongliang@loongson.cn>
|
|
commit f47a5d400b8ffa26cfc5b345dbff52fec94ac7f3 overlooked that
strtoul was responsible for setting p to a const-laundered copy of the
format string pointer f, even in the case where there was no number to
parse. by making the call conditional on isdigit, that copy was lost.
the logic here is a mess and should be cleaned up, but for now, this
seems to be the least invasive change that undoes the breakage.
|
|
depending on contents of the LC_TIME locale, log messages could be
malformatted (especially if the ABMON strings contain non-alphabetic
characters) or the subsequent code could invoke undefined behavior,
via passing a timebuf[] with unspecified contents to snprintf, if
the translated ABMON string did not fit in the 16-byte timebuf.
this does not appear to be a security-relevant bug, as locale loading
functionality is intentionally not available to set*id programs -- the
MUSL_LOCPATH environment variable is ignored when libc.secure is true,
and custom locales are not loadable without it.
|
|
|
|
having these constants be static was unnecessary, so just remove the
static.
this error should have been caught by compilers, but recent versions
of both gcc and clang accept these as "other forms of constant
expressions" which the C standard allows.
|
|
Previously, __riscv_flush_icache would not work correctly as
__vdso_flush_icache had a wrong symbol version. Fix this by correcting
the symbol version.
Fixes: 0a48860c27a8 ("add riscv64 architecture support")
|
|
|
|
the ppoll function has been accepted as a future part of the standard
as the outcome of Austin Group tracker issue 1263. at some point it
should be exposed unconditionally, but for now, expose it in the
default feature profile.
|
|
the ppoll function has been accepted as a future part of the standard
as the outcome of Austin Group tracker issue 1263. move the source
file to reflect this.
|
|
this was a POSIX requirement that was always in conflict with ISO C,
which specified a well-defined behavior for snprintf and swprintf so
long as the actual number of bytes/characters produced did not exceed
INT_MAX.
I originally raised this conflict for snprintf with the Austin Group
as tracker issue 761, which was never resolved. it was later reported
again as issue 1219, and as a result the conflicting requirement has
been removed.
the corresponding issue with swprintf does not seem to have been
addressed, but as the same reasoning applies to it, I am removing the
limitation on n for swprintf as well.
|
|
strtoul will consume leading whitespace or sign characters, which are
not valid in this context, thereby accepting invalid field specifiers.
so, avoid calling it unless there is a number to parse as the width.
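a fragment sketching the combined effect of this change and the
follow-up fix near the top of this log (f is the format cursor, p and
width as in the surrounding code; <ctype.h> and <stdlib.h> assumed):

    if (isdigit(*f)) width = strtoul(f, &p, 10);
    else p = (void *)f;  /* keep p current even when no digits were consumed */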
|
|
this matters because the kernel-provided mtab only escapes tabs,
spaces, newlines, and backslashes. it leaves carriage returns, form
feeds, and vertical tabs literal.
|
|
As entries in mtab are delimited by spaces, whitespace characters
are escaped as octal sequences. When reading them out, we have to
unescape these sequences to get the proper string.
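A minimal sketch of that decoding, assuming the standard three-digit
octal escapes; illustrative only, not the actual getmntent source:

    static void unescape(char *s)
    {
        char *d = s;
        for (; *s; s++, d++) {
            if (s[0] == '\\' && s[1] >= '0' && s[1] <= '3'
                && s[2] >= '0' && s[2] <= '7'
                && s[3] >= '0' && s[3] <= '7') {
                *d = (s[1]-'0')<<6 | (s[2]-'0')<<3 | (s[3]-'0');
                s += 3;
            } else *d = *s;
        }
        *d = 0;
    }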
|
|
this style is preferred because it allows the code to be
compile-checked even on archs where it is not used.
|
|
this is contrary to the spec as written, which requires %lc to behave
as if it were %ls on a 2-wchar_t buffer containing the argument and
zero. however, apparently no other implementations conform to the spec
as written, and in response to Austin Group issue #1647, WG14 chose to
align with existing practice and have %lc produce output for this case.
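for illustration, assuming a %lc implementation following the WG14
resolution, the case in question is a null wide character argument:

    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        /* old reading: no output (as if %ls on an empty string);
         * new behavior: the null character is converted and written,
         * so printf reports 1 byte of output */
        int n = printf("%lc", (wint_t)L'\0');
        return n == 1 ? 0 : 1;
    }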
|
|
The name resolution would abort when getting more than 63 records per
request, due to what seems to be a left-over from the original code.
This check was non-breaking but spurious prior to TCP fallback
support, since any 512-byte packet with more than 63 records was
necessarily malformed. But now, it wrongly rejects valid results.
Reported by Daniel Stefanik in Alpine Linux aports issue 15320.
|
|
AT_NO_AUTOMOUNT is implied for stat/lstat/fstatat syscalls since Linux
3.1 (commit b6c8069d3577481390b3f24a8434ad72a3235594). However, this
is not the case for statx syscall, which defaults to automounting, so
this flag must be passed explicitly when statx is used to implement
stat-like functions.
This change affects only arches which use 32-bit seconds in struct kstat,
as well as out-of-tree/future ports to arches which lack SYS_fstatat.
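A rough sketch of the idea, assuming a libc that exposes the statx()
wrapper (the stat-like functions do the equivalent internally via the
raw syscall):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/stat.h>

    /* pass AT_NO_AUTOMOUNT explicitly so a statx-backed stat() matches
     * the implicit no-automount behavior of the stat/lstat/fstatat
     * syscalls */
    static int stat_like(const char *path, struct statx *stx)
    {
        return statx(AT_FDCWD, path, AT_NO_AUTOMOUNT,
                     STATX_BASIC_STATS, stx);
    }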
|
|
The lifetime of the compound literal ends after the "if" statement's
implicit block. gcc also warns about this.
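Hypothetical code illustrating the bug class (not the patched source):

    struct entry { int id; };
    const struct entry *lookup(void);

    int describe(int use_default)
    {
        const struct entry *e;
        if (use_default)
            e = &(struct entry){ .id = 1 }; /* lifetime ends with the if body */
        else
            e = lookup();
        return e->id;  /* undefined behavior when use_default was nonzero */
    }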
|
|
C11 6.11.5p1:
> The placement of a storage-class specifier other than at the
> beginning of the declaration specifiers in a declaration is an
> obsolescent feature.
gcc also warns about this.
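An illustrative pair:

    int static const x = 1; /* obsolescent: storage-class specifier not first */
    static const int y = 1; /* same meaning, preferred placement */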
|
|
If __synccall() fails to capture all threads because tkill fails for
some reason other than EAGAIN, then the callback given will never be
executed, so nothing will ever overwrite the initial value. So that is
the value that will be returned from the function. The previous setting
of 1 is not a valid value for setuid() et al. to return.
I chose -EAGAIN since I don't know the reason the synccall failed ahead
of time, but EAGAIN is a specified error code for a possibly temporary
failure in setuid().
|
|
The code intends for the sem_post() in line 97 (now 98) to only unblock
target threads waiting on line 29. But after the first thread is
released, the next sem_post() might also unblock a thread waiting on
line 36. That would cause the thread to return to the execution of user
code before all threads are done, leading to user code being executed in
a mixed-credentials environment.
What's more, if this happens more than once, then the mass release on
line 110 (now line 111) will cause multiple threads to execute the
callback at the same time, and the callbacks are currently not written
to cope with that situation.
Adding another semaphore allows the caller to say explicitly which
threads it wants to release.
|
|
when the result count was zero, glob was ignoring a possible
GLOB_ABORTED error code and returning GLOB_NOMATCH. whether this
happened could be nondeterministic and dependent on the order of
dirent enumeration, in cases where multiple matches were present and
only some produced errors.
caught by Tor's test_util_glob.
|
|
This is the only missing part in struct statvfs. The LSB calls
[f]statfs() deprecated, and its weird types are definitely
off-putting. However, its use is required to get f_type.
Instead, allocate one of the six spares to f_type, copied directly
from struct statfs. This then becomes a small extension to the
standard interface on Linux, instead of two different interfaces, one
of which is quite odd due to being an ABI type, and there no longer is
any reason to use statfs().
The underlying kernel type is a mess, but all architectures agree on u32
(or more) for the ABI, and all filesystem magicks are 32-bit integers.
Since commit 6567db65f495cf7c11f5c1e60a3e54543d5a69bc (prior to
1.0.0), the spare slots have been zero-filled, so on all versions that
may reasonably be encountered in the wild, applications can rely on
a nonzero f_type as indication that the new field has been filled in.
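Hypothetical usage of the new field; the nonzero check covers libcs
that still leave the spare slot zeroed, and 0x01021994 is TMPFS_MAGIC
from linux/magic.h:

    #include <sys/statvfs.h>

    int is_tmpfs(const char *path)
    {
        struct statvfs v;
        if (statvfs(path, &v) < 0) return -1;
        if (!v.f_type) return -1;  /* field not filled in by this libc */
        return v.f_type == 0x01021994;
    }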
|
|
powl used >= LDBL_MAX as infinity check, but LDBL_MAX is finite, so
this can cause wrong results e.g. powl(LDBL_MAX, 0.5) returned inf
or powl(2, LDBL_MAX) returned inf without raising overflow.
huge y values (close to LDBL_MAX) could cause intermediate results to
overflow (computing y * log2(x) with more than long double precision)
and e.g. powl(0.5, 0x1p16380L) or powl(10, 0x1p16380L) returned nan.
this is fixed by handling huge y early since that always overflows or
underflows.
reported by Paul Zimmermann against exp10l (which uses powl).
|
|
acosh(x) is nan for x < 1, but x < 0 cases were not handled specially
and acoshl gave wrong result for some -0x1p32 < x < -2 values, e.g.:
acoshl(-0x1p20) returned -inf,
acoshl(-0x1.4p20) returned -0x1.db365758403aa9acp+0L,
fixed by checking the sign bit and handling it specially.
reported by Paul Zimmermann.
|
|
the __dns_parse code used by the stub resolver traditionally included
code to reject label pointers to offsets past a 512 byte limit,
despite never processing the label contents, only stepping over them.
when commit 51d4669fb97782f6a66606da852b5afd49a08001 added support for
tcp fallback, this limit was overlooked, and as a result, it was at
least theoretically possible for some valid large answers to be
rejected on account of these offsets.
since the limit was never serving any useful purpose, just remove it.
|
|
in the event of chained CNAMEs, the answer to a query will contain the
entire CNAME chain, not just one CNAME record. previously, the answer
buffer size had been chosen to admit a maximal-length CNAME, but only
one. a moderate-length chain could fill the available 768 bytes
leaving no room for an actual address answering the query.
while the DNS RFCs do not specify any limit on the length of a CNAME
chain, or any reasonable behavior if the chain exceeds the entire 64k
possible message size, actual recursive servers have to impose a
limit, and as such, for all practical purposes, chains longer than this
limit are not usable. it turns out BIND has a hard-coded limit of 16,
and Unbound has a default limit of 11.
assuming the recursive server makes use of "compression" (pointers),
each maximal-length CNAME record takes at most 268 bytes, and thus any
chain up to length 16 fits in at most 4288 bytes.
this patch increases the answer buffer size to preserve the original
intent of having 512 bytes available for address answers, plus space
needed for a maximal CNAME chain, for a total of 4800 bytes. the
resulting size of 9600 bytes for two queries (A+AAAA) is still well
within what is reasonable to place in automatic storage.
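the sizing arithmetic, spelled out with illustrative names (not musl's
actual macros):

    enum {
        CNAME_RR_MAX    = 268,  /* compressed maximal-length CNAME RR */
        CNAME_CHAIN_MAX = 16,   /* BIND's hard-coded limit */
        ABUF_SIZE       = 512 + CNAME_CHAIN_MAX * CNAME_RR_MAX,  /* 4800 */
    };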
|
|
the extra terms 3 and LDBL_MANT_DIG/4 are remnants of a proto-musl
implementation of printf where the sign/prefix and floating point
conversions were performed naively into this buffer. having them there
obscures the actual intended buffer size (sufficient to hold between 2
and 3 octal digits per byte, rounded up to 3 for simplicity) and
interferes with upcoming work to add C2x binary formats which would
otherwise be stuck having to explain a similar fix to buffer size as
part of an unrelated change.
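i.e., the declaration reduces to something like the following
(illustrative form, not necessarily the exact source):

    char buf[sizeof(uintmax_t)*3];  /* up to 3 octal digits per byte */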
|
|
%c takes an argument of type int, not char, and %lc/%C takes an
argument of type wint_t (unsigned), not int.
for most cases, this makes no practical difference, but since wide
printf variants convert narrow %c format specifiers via btowc,
interpreting the promoted-to-int unsigned char value passed in as a
(signed, on most archs) char causes 255 to get collapsed to EOF and
interpreted as such by btowc.
this is only relevant in the byte-based C locale, so prior to commit
f22a9edaf8a6f2ca1d314d18b3785558279a5c03, there was no observable
distinction in behavior. for UTF-8, all bytes which might be negative
when interpreted as char are encoding errors when used with %c/btowc.
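a fragment showing the distinction (ap being the printf argument list):

    int c = va_arg(ap, int);             /* %c's argument arrives as int */
    wint_t wc = btowc((unsigned char)c); /* not (char)c: byte 255 must not
                                            collapse to EOF/WEOF */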
|
|
the clone() function has been effectively unusable since it was added,
due to producing a child process with inconsistent state. in
particular, the child process's thread structure still contains the
tid, thread list pointers, thread count, and robust list for the
parent. this will cause malfunction in interfaces that attempt to use
the tid or thread list, some of which are specified to be
async-signal-safe.
this patch attempts to make clone() consistent in a _Fork-like sense.
as in _Fork, when the parent process is multi-threaded, the child
process inherits an async-signal context where it cannot call
AS-unsafe functions, but its context is now intended to be safe for
calling AS-safe functions. making clone fork-like would also be a
future option, if it turns out that this is what makes sense to
applications, but it's not done at this time because the changes would
be more invasive.
in the case where the CLONE_VM flag is used, clone is only vfork-like,
not _Fork-like. in particular, the child will see itself as having the
parent's tid, and cannot safely call any libc functions but one of the
exec family or _exit.
handling of flags and variadic arguments is also changed so that
arguments are only consumed with flags that indicate their presence,
and so that flags which produce an inconsistent state are disallowed
(reported as EINVAL). in particular, all libc functions carry a
contract that they are only callable with ABI requirements met, which
includes having a valid thread pointer to a thread structure that's
unique within the process, and whose contents are opaque and only able
to be setup internally by the implementation. the only way for an
application to use flags that violate these requirements without
executing any libc code is to perform the syscall from
application-provided asm.
|
|
apparently Linux clears the registered exit futex address on fork.
this means that, if after forking the child process becomes
multithreaded and the original thread exits, the thread list will
never be unlocked, and future attempts to use the thread list will
deadlock.
re-register the exit futex address after _Fork in the child to ensure
that it's preserved.
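a hypothetical illustration of the mechanism, not musl's internal
code; set_tid_address re-arms the clear_child_tid futex that the
kernel dropped across fork:

    #include <sys/syscall.h>
    #include <unistd.h>

    static volatile int tid_slot;  /* stand-in for the thread's tid field */

    /* called in the child after fork/_Fork: without this, the futex
     * wake that unlocks the thread list would never fire when this
     * thread later exits */
    static void rearm_exit_futex(void)
    {
        tid_slot = syscall(SYS_set_tid_address, &tid_slot);
    }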
|
|
mbrtowc truncates n to unsigned int when storing its copy.
If n > UINT_MAX and the locale is not POSIX, the function will
return a wrong value greater than UINT_MAX on the success path.
|
|
analogous to the bug in wcscmp and wcsncmp that was fixed in commit
07616721f1fa6cb215ffbef23441cae80412484f.
|
|
The nl_type and nl_arg arrays defined in vfwprintf may be accessed
with an index up to and including NL_ARGMAX, but they are only of size
NL_ARGMAX, meaning they may be written to or read from 1 element too
far.
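A sketch of the fix; NL_ARGMAX comes from <limits.h>, and the
positional slots are indexed 1..NL_ARGMAX inclusive:

    int nl_type[NL_ARGMAX+1];       /* previously [NL_ARGMAX] */
    union arg nl_arg[NL_ARGMAX+1];  /* union arg is vfwprintf's internal type */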
|
|
Resource usage data is filled by the kernel only when wait4 returns
a pid, i.e. a positive value.
Commit 5850546e9669f793aab61dfc7c4f2c1ff35c4b29 introduced this bug,
possibly because of copy-pasting from getrusage.
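A simplified sketch of the corrected logic (64-bit layouts assumed,
no time64 conversion):

    #include <sys/resource.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    pid_t wait4_sketch(pid_t pid, int *status, int options, struct rusage *ru)
    {
        struct rusage kru;
        long r = syscall(SYS_wait4, pid, status, options, ru ? &kru : 0);
        if (r > 0 && ru) *ru = kru;  /* the kernel fills kru only when r > 0 */
        return r;
    }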
|
|
For time64 support, musl normally defines SYS_foo to the time32 variant
of that syscall on arches that have it, and to the time64 variant
otherwise, so that "SYS_foo == SYS_foo_time64" implies that the arch is
time64-only. However, SYS_semtimedop is an odd case: some arches define
only SYS_semtimedop_time64, yet they are not time64-only, because the
time32 variant is provided via SYS_ipc instead. For such arches,
defining SYS_semtimedop to SYS_semtimedop_time64 would break the
implication above, so commit 4bbd7baea7c8538b3fb8e30f7b022a1eee071450
doesn't do this. Commit eb2e298cdc814493a6ced8c05cf0d0f5cccc8b63
attempts to detect time64-only arches by checking that both
SYS_semtimedop and SYS_ipc are undefined, but this doesn't work for
x32, because it's a time64-only arch that does define SYS_semtimedop.
As a result, 32-bit timeouts trigger the fallback path that passes
a 32-bit timespec to the kernel while it expects a 64-bit one, so
the effective tv_sec is formed by interpreting 32-bit tv_sec and
tv_nsec as a single long long, and the effective tv_nsec is whatever
is located in the next 64 bits of the stack.
Fix this by expanding the time64-only check to include arches where
SYS_semtimedop is the time64 variant of the syscall.
|
|
When an option that requires an argument is the last character of
argv[argc-1], getopt computes argv[argc] + optpos. While optpos
is always zero in this case, adding it to a null pointer is still
undefined.
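The issue in miniature (hypothetical, not the getopt source):

    char *p = argv[argc];  /* a null pointer, by definition of argv */
    char *q = p + optpos;  /* undefined behavior even though optpos == 0 */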
|
|
If lstat/stat fails with EACCES, st is left uninitialized, but its
st_dev/st_ino fields are then used in several places:
* for FTW_MOUNT check (in practice typically results in a false
positive and an early return)
* for copying to the new struct history (though the struct is not used
afterwards since we don't recurse in this case)
* for cycle detection check (could theoretically result in a false
positive and an early return)
To avoid adding FTW_NS checks to all these places, fix this by
zero-initializing st_dev/st_ino (which can never match an existing
dentry due to zero inode being reserved in Linux), and check for FTW_NS
only when handling FTW_MOUNT since we need two valid dentries there.
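A rough sketch of the shape of the fix (not the nftw source):

    if (fstatat(AT_FDCWD, path, &st, flag) < 0) {
        if (errno != EACCES) return -1;
        type = FTW_NS;
        st.st_dev = 0;  /* inode 0 is reserved on Linux, so these can */
        st.st_ino = 0;  /* never match a real dentry in later checks  */
    }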
|
|
The received length field in the message may be greater than the
size of the 'answer' buffer in which the message resides. Currently,
ABUF_SIZE is 768. And if we get a larger 'alens[i]', it will result
in an out-of-bounds reading in __dns_parse().
To fix this, limit the length to the size of the received buffer.
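Roughly the shape of the fix, using the names from the description:

    if (alens[i] > ABUF_SIZE)
        alens[i] = ABUF_SIZE;  /* never let __dns_parse read past 'answer' */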
|
|
the buffer-flush function did not account for mbtowc returning 0
rather than 1 when converting the nul character. this prevented
advancing past it, instead repeatedly converting it into the output
wide character string until the max output length was exhausted.
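a sketch of the corrected advance (not the musl source); mbtowc
returns 0, not 1, when it converts the nul character:

    #include <stdlib.h>
    #include <stddef.h>

    size_t to_wide(wchar_t *ws, size_t wn, const char *s, size_t sn)
    {
        size_t i = 0;
        while (i < wn && sn) {
            int l = mbtowc(&ws[i], s, sn);
            if (l < 0) break;   /* encoding error */
            if (l == 0) l = 1;  /* consumed the nul byte: still advance */
            s += l; sn -= l; i++;
        }
        return i;
    }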
|
|
this is purely aesthetic and should not affect code generation or
functionality.
|
|
commit d42269d7c85308abdbf8cee38b1a1097249eb38b appropriated the
stream error flag temporarily to let the printf family of functions
suppress further output attempts after encountering a write error.
since the wide printf code relies on (narrow) vfprintf to print
padding and numeric conversions, a hack was put in vfprintf not to
clear the initial error status unless the stream is narrow oriented.
this was okay, because calling vfprintf on a wide-oriented stream
(outside of internal use by the implementation) produces undefined
behavior. however, it was highly non-obvious to anyone reading the
wide printf code, where the calls to fprintf without first checking
for error status appeared erroneous.
this patch removes all direct use of fprintf from the wide printf
core, except in the numeric conversions case where it was already
checked before starting processing of the directive that the error
status is not set. the other calls, which were performing padding, are
replaced by a new pad() helper function, which performs the check and
abstracts out the mechanism of writing the padding.
direct use of the error flag is also replaced by ferror, which is
defined as a macro in stdio_impl.h, expanding directly to the flag
check with no call or locking overhead.
|
|
unlike with wide printf variants, encoding errors are not a vector by
which this bug is reachable, and the out() helper function already
ensured that no further output could be written after an output error,
transient or otherwise. however, the %n specifier could still be
processed after an error, yielding a side effect that wrongly implied
output had succeeded.
due to buffering effects, it's still possible for %n to show output as
having "succeeded", but for it never to appear on the underlying file
due to an error at flush time. this change, however, ensures that
processing of %n does not conflict with any error which has already
been seen.
|
|
this fixes a broader bug for which a special case was reported by
Bruno Haible, in the form of %n getting processed (and reporting the
number of wide characters which would have been written, but weren't)
after an encoding error (EILSEQ). in addition to the %n case, some but
not all of the format specifiers continued to attempt output after an
error. in particular, %c, %lc, and %s all used fputwc directly without
any check for error status.
as long as the error condition was permanent rather than transient,
these write attempts had no visible side effects, but in theory it
could be visible, for example with EAGAIN/EWOULDBLOCK or ENOSPC, if
the condition precluding output came to an end. this could produce
output with missing non-final data, rather than just truncated output,
albeit with the function still returning -1 as expected to report an
error.
to fix this, a check is added to stop processing of any new directive
(including %n) if the stream is already in error state, and direct use
of fputwc is replaced with calls to the out() helper function, which
checks for error status.
note that fprintf is also used directly without checking error status,
but due to how commit d42269d7c85308abdbf8cee38b1a1097249eb38b
previously attempted to solve the issue of output after error, the
call to fprintf does not attempt to write anything when the
wide-oriented stream is already in error state. this is non-obvious,
and is quite a hack, so it should be changed, but I've left it alone
for now to make the bug fix commit itself as non-invasive as possible.
|
|
since the code path for %c was already doing it right, and the logic
is identical, condense them into a single case.
|
|
this function was overlooked during the time64 transition, probably as
a result of not having any time-related types in its application-side
interface. however, for archs that lack the traditional poll syscall
and have only ppoll, it used timespec as part of its interface with
the kernel: the millisecond timeout was converted to a timespec to
pass to SYS_ppoll. this is a type/ABI mismatch on 32-bit archs with
legacy time32 syscalls.
only one supported arch, or1k, is affected. all of the others either
have SYS_poll, or are 64-bit.
rather than using timespec, define a type locally to match what the
kernel expects. the condition (SYS_ppoll_time64 == SYS_ppoll),
comparable to conditions used elsewhere in timespec-handling code,
evaluates true for "natively time64" 32-bit archs including x32,
future riscv32, and all future 32-bit archs (via definitions in
internal syscall.h). otherwise, the arch is either 64-bit or has
syscalls that take the legacy type, and in either case "long" is
correct.
this fix is based on bug report and proposal by Alexey Izbyshev but
with a different approach to the changes to minimize the contextual
knowledge needed for a reader to understand the source file.
|
|
If the (normalized) timeout passed to select exceeds INT_MAX seconds on
an arch with SYS_pselect6_time64 and the kernel is too old to support
time64 syscalls, the timeout is implicitly converted to (32-bit) long on
the fallback path, losing its upper 32 bits and potentially becoming a
small positive value, violating the intended semantics, or even
a negative value, causing the fallback syscall failure. Fix this by
saturating the timeout at INT_MAX as done in other time64 fallback
cases.
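A fragment of the saturation (INT_MAX from <limits.h>); truncation to
the 32-bit field could otherwise wrap to a small or negative value:

    long long s = tv ? tv->tv_sec : 0;
    if (s > INT_MAX) s = INT_MAX;  /* saturate for the time32 fallback */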
|
|
this is the best-effort fallback path for kernels that can't actually
support the dup3 functionality. it was setting FD_CLOEXEC flag on the
target fd (new) even if the dup2 operation failed. normally that
shouldn't happen under correct usage, but it's possible if the source
fd is not open or intentionally invalid (e.g. -1).
|
|
our dup3 code wrongly skipped directly to making the SYS_dup2 syscall
whenever the O_CLOEXEC bit of flags was not set. this is incorrect if
any new flags are ever added, as it would silently ignore them rather
than failing with an error.
archs which lack SYS_dup2 were unaffected.
adjust the logic so that SYS_dup3 is attempted whenever flags is
nonzero, and explicitly fail with EINVAL if SYS_dup3 is unavailable
and there are any unknown flags.
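a rough sketch of the fallback path only (not the musl source),
combining this with the FD_CLOEXEC fix above:

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* used only when SYS_dup3 is unavailable */
    static int dup3_fallback(int old, int new, int flags)
    {
        int r;
        if (flags & ~O_CLOEXEC) { errno = EINVAL; return -1; }
        r = dup2(old, new);
        if (r >= 0 && (flags & O_CLOEXEC))
            fcntl(r, F_SETFD, FD_CLOEXEC);  /* only after a successful dup2 */
        return r;
    }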
|
|
kernels using the fallback have an inherent close-on-exec race
condition and as such support for them is only best-effort anyway.
however, ignoring potential new flags is still very bad behavior.
instead, fail with EINVAL.
|