path: root/src/string/x86_64
Age  Commit message  Author  Lines
2015-02-10  x86_64/memset: simple optimizations  Denys Vlasenko  -14/+16
"and $0xff,%esi" is a six-byte insn (81 e6 ff 00 00 00), can use 4-byte "movzbl %sil,%esi" (40 0f b6 f6) instead. 64-bit imul is slow, move it as far up as possible so that the result (rax) has more time to be ready by the time we start using it in mem stores. There is no need to shuffle registers in preparation to "rep movs" if we are not going to take that code path. Thus, patch moves "jump if len < 16" instructions up, and changes alternate code path to use rdx and rdi instead of rcx and r8. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
2013-08-01  optimized memset asm for i386 and x86_64  Rich Felker  -0/+41
the concept of both versions is the same; they differ only in details.

for long runs, they use "rep stosl" or "rep stosq", and for small runs, they use a trick: writing from both ends towards the middle, which reduces the number of branches needed. in addition, if memset is called multiple times with the same length, all branches will be predicted; there are no loops.

for larger runs, there are likely faster approaches than "rep", at least on some cpu models. for 32-bit, it's unlikely that any faster approach exists that does not require non-baseline instructions; doing anything fancier would require inspecting cpu capabilities. for 64-bit, there may very well be faster versions that work on all models; further optimization could be explored in the future.

with these changes, memset is anywhere between 50% faster and 6 times faster, depending on the cpu model and the length and alignment of the destination buffer.
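A minimal C sketch of the small-run trick described above, writing from both ends towards the middle so the two stores overlap and no loop is needed (illustrative only; the real routine is hand-written asm and the function name here is made up):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Fill n bytes (n <= 16 assumed) with the replicated byte in 'fill':
     * one store anchored at the start and one anchored at the end cover
     * every length in the range without looping. */
    void memset_small(unsigned char *d, uint64_t fill, size_t n)
    {
        if (n >= 8) {
            memcpy(d, &fill, 8);            /* first 8 bytes */
            memcpy(d + n - 8, &fill, 8);    /* last 8 bytes, may overlap the first */
        } else if (n >= 4) {
            uint32_t f = (uint32_t)fill;
            memcpy(d, &f, 4);
            memcpy(d + n - 4, &f, 4);
        } else if (n) {
            d[0] = d[n/2] = d[n-1] = (unsigned char)fill;
        }
    }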
2012-09-10  asm for memmove on i386 and x86_64  Rich Felker  -0/+15
for the sake of simplicity, I've only used rep movsb rather than breaking up the copy to use rep movsd/q. on all modern cpus, this seems to be fine, but if there are performance problems, there might be a need to go back and add support for rep movsd/q.
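A rough C rendering of that approach, using GNU-style inline asm for the rep movsb and a descending copy when dest overlaps src from above (an illustrative sketch, not the actual memmove.s):

    #include <stddef.h>

    void *memmove_sketch(void *dest, const void *src, size_t n)
    {
        unsigned char *d = dest;
        const unsigned char *s = src;

        if (d == s || n == 0)
            return dest;

        if (d < s) {
            /* non-destructive forward copy: rep movsb walks upward */
            __asm__ volatile ("rep movsb"
                              : "+D"(d), "+S"(s), "+c"(n) : : "memory");
        } else {
            /* overlapping case: copy backwards, starting from the last byte */
            d += n - 1;
            s += n - 1;
            __asm__ volatile ("std; rep movsb; cld"
                              : "+D"(d), "+S"(s), "+c"(n) : : "memory");
        }
        return dest;
    }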
2012-08-11  memcpy asm for i386 and x86_64  Rich Felker  -0/+22