Commit Graph

71 Commits

Author SHA1 Message Date
Daniel Micay
0436227092 no longer need glibc pthread_atfork workaround 2020-03-29 11:40:12 -04:00
Daniel Micay
449962e044 disable obsolete glibc extensions elsewhere 2020-02-03 08:39:19 -05:00
Daniel Micay
a28da3c65a use prefix for extended mallinfo functions 2019-09-07 18:33:24 -04:00
Daniel Micay
d37657e125 enable llvm-include-order tidy check 2019-08-18 02:39:55 -04:00
Daniel Micay
d80919fa1e substantially raise the arbitrary arena limit 2019-07-12 03:43:33 -04:00
Daniel Micay
410e9efb93 extend configuration sanity checks 2019-07-11 17:09:48 -04:00
Daniel Micay
a32e26b8e9 avoid trying to use mremap outside of Linux 2019-07-05 21:59:44 -04:00
Daniel Micay
bc75c4db7b realloc: use copy_size to check for canaries
This avoids unnecessarily copying the canary when doing a realloc from a
small size to a large size. It also avoids trying to copy a non-existent
canary out of a zero-size allocation, since zero-size allocations are
memory-protected.
2019-06-17 00:28:10 -04:00
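A rough sketch of the check in C; copy_contents, MAX_SLAB_SIZE, and
CANARY_SIZE are illustrative assumptions, not the allocator's actual
identifiers:

    #include <string.h>

    #define MAX_SLAB_SIZE 16384 /* illustrative, not the real constant */
    #define CANARY_SIZE 8       /* illustrative */

    static size_t min_size(size_t a, size_t b) { return a < b ? a : b; }

    /* Copy the old contents into the new allocation during realloc.
     * Slab size classes reserve CANARY_SIZE bytes at the end, so when
     * copy_size lands in the slab range the canary bytes are trimmed
     * off: this skips the canary when growing from a small size to a
     * large one, and never reads from a zero-size (memory-protected)
     * allocation. */
    static void copy_contents(void *new, const void *old,
                              size_t new_size, size_t old_size) {
        size_t copy_size = min_size(new_size, old_size);
        if (copy_size > 0 && copy_size <= MAX_SLAB_SIZE) {
            copy_size -= CANARY_SIZE;
        }
        memcpy(new, old, copy_size);
    }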
Daniel Micay
12525f2861 work around old glibc releases without threads.h 2019-06-06 08:10:57 -04:00
Daniel Micay
409a639312 provide working malloc_info outside Android too 2019-04-19 16:56:07 -04:00
Daniel Micay
494436c904 implement options handling for malloc_info 2019-04-19 16:23:14 -04:00
Daniel Micay
a13db3fc68 initialize size class CSPRNGs from init CSPRNG
This avoids making a huge number of getrandom system calls during
initialization. The init CSPRNG is unmapped before initialization
finishes, and the size class CSPRNGs are still reseeded from the OS. The
purpose of the independent CSPRNGs is simply to avoid the massive
performance hit of synchronization, and there's no harm in doing it this
way.

Keeping around the init CSPRNG and reseeding from it would defeat the
purpose of reseeding, and it isn't a measurable performance issue since
reseeding can simply be tuned to happen less often.
2019-04-15 06:50:24 -04:00
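A sketch of the seeding structure this describes; the rng type, the
rng_keystream helper, and the class count are assumptions for
illustration, not the allocator's actual CSPRNG:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/random.h> /* Linux getrandom() */

    /* Illustrative CSPRNG state; only the seeding structure matters here. */
    struct rng { uint8_t key[32]; };

    #define N_SIZE_CLASSES 49 /* illustrative count */

    static struct rng size_class_rngs[N_SIZE_CLASSES];

    /* assumed helper: draw keystream output from a seeded generator */
    void rng_keystream(struct rng *rng, void *out, size_t len);

    static void init_rngs(void) {
        struct rng init_rng;
        /* one getrandom() syscall instead of one per size class */
        if (getrandom(init_rng.key, sizeof(init_rng.key), 0) !=
                (ssize_t)sizeof(init_rng.key)) {
            abort();
        }
        for (size_t i = 0; i < N_SIZE_CLASSES; i++) {
            rng_keystream(&init_rng, size_class_rngs[i].key,
                          sizeof(size_class_rngs[i].key));
        }
        /* init_rng's memory is unmapped after initialization; the
         * per-class generators later reseed directly from the OS */
    }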
Daniel Micay
f115be8392 shrink initial region table size to fit in 1 page 2019-04-15 00:04:00 -04:00
Daniel Micay
e7eeb3f35c avoid reading thread_local more than once 2019-04-14 20:26:14 -04:00
Daniel Micay
7e465c621e use allocate_large directly in large remap path 2019-04-14 19:46:22 -04:00
Daniel Micay
1c899657c1 add is_init check to mallinfo functions 2019-04-14 19:12:38 -04:00
Daniel Micay
8774065b13 fix non-init size for malloc_object_size extension 2019-04-14 19:01:25 -04:00
Daniel Micay
84a25ec83e fix build with CONFIG_STATS enabled 2019-04-11 00:51:34 -04:00
Daniel Micay
d4b8fee1c4 allow using the largest slab allocation size 2019-04-10 16:54:58 -04:00
Daniel Micay
086eb1fee4 add a final spacing class of 1-slot size classes 2019-04-10 16:32:24 -04:00
Daniel Micay
7a89a7b8c5 support for slabs with 1 slot for largest sizes 2019-04-10 16:26:49 -04:00
Daniel Micay
6c31f6710a support extended range of small size classes 2019-04-10 08:31:51 -04:00
Daniel Micay
d5f18c47b3 micro-optimize initialization with arenas 2019-04-10 08:07:24 -04:00
Daniel Micay
62c73d8b41 harden thread_arena check 2019-04-10 07:40:29 -04:00
Daniel Micay
d5c00b4d0d disable current in-place growth code path for now 2019-04-09 19:20:34 -04:00
Daniel Micay
d5c1bca915 use round-robin assignment to arenas
The initial implementation was a temporary hack rather than a serious
implementation of random arena selection. It may still make sense to
offer random selection, but it should be implemented via the CSPRNG
instead of this silly hack. It would also make sense to offer dynamic
load balancing, particularly with sched_getcpu().

This results in a much more predictable spread across arenas. This is
one place where randomization probably isn't a great idea, because it
makes the benefits of arenas unpredictable in programs not creating a
massive number of threads. The security benefit of randomization here is
also quite small. It's not certain that randomization is even a net win
for security: it isn't random enough, and if an attacker is able to
attempt multiple attacks, it can result in a more interesting mix of
threads sharing an arena for them.
2019-04-09 16:54:14 -04:00
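A minimal sketch of round-robin assignment, assuming a fixed arena count
and a thread-local slot (the names are illustrative):

    #include <stdatomic.h>

    #define N_ARENA 4 /* illustrative arena count */

    static atomic_uint arena_counter;
    static _Thread_local unsigned thread_arena = N_ARENA; /* sentinel: unassigned */

    /* Hand each new thread the next arena in sequence, giving a
     * predictable spread instead of random selection. */
    static unsigned get_arena(void) {
        if (thread_arena == N_ARENA) {
            thread_arena = atomic_fetch_add_explicit(
                &arena_counter, 1, memory_order_relaxed) % N_ARENA;
        }
        return thread_arena;
    }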
Daniel Micay
9a0de626fc move stats accounting to utility functions 2019-04-09 03:57:44 -04:00
Daniel Micay
9453332e57 remove redundant else block 2019-04-09 00:06:17 -04:00
Daniel Micay
a4cff7a960 factor out slab memory_set_name into label_slab 2019-04-07 18:02:56 -04:00
Daniel Micay
ef90f404a6 add sanity check for stats option 2019-04-07 09:06:03 -04:00
Daniel Micay
e0891c8cfc implement the option of large size classes
This extends the size class scheme used for slab allocations to large
allocations. This drastically improves performance for many real world
programs using incremental realloc growth instead of using proper growth
factors. There are 4 size classes for every doubling in size, resulting
in a worst case of ~20% extra virtual memory being reserved and a huge
increase in performance for pathological cases. For example, growing
from 4MiB to 8MiB by calling realloc in increments of 32 bytes will only
need to do real work (beyond looking up the size class) 4 times, instead
of 1024 times with 4096-byte granularity.
2019-04-07 08:52:17 -04:00
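A sketch of the rounding this scheme implies: 4 classes per doubling
amounts to keeping the top three bits of the size. The function below is
illustrative (it uses the GCC/Clang __builtin_clzll and assumes 64-bit
size_t), not the allocator's exact code:

    #include <stddef.h>

    /* Round a large request up to the next size class. With 4 classes
     * per doubling, classes fall at x, 1.25x, 1.5x, 1.75x, 2x, ..., so
     * at most ~20% of the returned allocation is rounding waste. */
    static size_t get_large_size_class(size_t size) {
        if (size <= 4) {
            return size; /* tiny sizes are exact in this sketch */
        }
        size_t s = size - 1;
        unsigned msb = 63 - __builtin_clzll(s); /* floor(log2(s)) */
        unsigned shift = msb - 2;               /* keep the top 3 bits */
        return ((s >> shift) + 1) << shift;
    }

With this spacing, the 4MiB-to-8MiB example above only crosses the
5MiB, 6MiB, 7MiB, and 8MiB classes, which is why realloc does real work
just 4 times.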
Daniel Micay
c68de6141d factor out duplicated code in malloc/realloc 2019-04-07 05:48:10 -04:00
Daniel Micay
ce36d0c826 split out allocate_large function 2019-04-07 05:44:09 -04:00
Daniel Micay
3d18fb8074 implement Android M_PURGE mallopt via malloc_trim 2019-04-07 03:35:26 -04:00
Daniel Micay
4f08e40fe5 move thread sealing implementation 2019-04-07 00:50:26 -04:00
Daniel Micay
55891357ff clean up the exported API section of the code 2019-04-07 00:36:53 -04:00
Daniel Micay
491ce6b0b1 no need to provide valloc and pvalloc on Android 2019-04-07 00:31:09 -04:00
Daniel Micay
1eed432b9a limit more glibc cruft to that environment 2019-04-07 00:30:05 -04:00
Daniel Micay
27a4c883ce extend stats with nmalloc and ndalloc 2019-04-06 23:19:03 -04:00
Daniel Micay
e94fe50a0d include zero byte size class in stats
These allocations don't consume any actual memory, but they do still use
up the virtual memory assigned to the size class and require metadata.
2019-04-06 22:43:56 -04:00
Daniel Micay
712748aaa8 add implementation of Android mallinfo extensions
These are used internally by Bionic to implement malloc_info.
2019-04-06 22:39:01 -04:00
Daniel Micay
0f107cd2a3 only provide malloc_info stub for glibc
This has a proper implementation in Bionic outside of the malloc
implementation via the extended mallinfo API.
2019-04-06 22:01:12 -04:00
Daniel Micay
350d0e5fd2 add real mallinfo implementation for Android
Android Q uses the mallinfo implementation in the ART GC:

c220f98180
1575267302
2019-04-06 20:54:26 -04:00
Daniel Micay
df9650fe64 conditionally include threads.h 2019-03-26 01:28:27 -04:00
Daniel Micay
98deb9de52 relabel malloc read-only after init data 2019-03-25 20:34:10 -04:00
Daniel Micay
fc8f2c3b60 move pthread_atfork wrapper to util header 2019-03-25 17:16:52 -04:00
Daniel Micay
b5187a0aff only use __register_atfork hack for old glibc 2019-03-25 17:16:22 -04:00
Daniel Micay
c5e911419d add initial implementation of arenas 2019-03-25 14:59:50 -04:00
Daniel Micay
55769496dc move hash_page to pages.h 2019-03-25 14:54:22 -04:00
Daniel Micay
13de480bde rename quarantine bitmap field for clarity 2019-03-24 20:24:40 -04:00