Commit Graph

102 Commits

Daniel Micay
29b09648d6 avoid undefined clz and shift in edge cases
This is triggered when get_large_size_class is called with a size in the
range [1,4]. This can occur with aligned_alloc(8192, size). In practice,
it doesn't appear to cause any harm, but we shouldn't have any undefined
behavior for well-defined usage of the API. It also occurs if the caller
passes a pointer outside the slab region to free_sized but the expected
size is in the range [1,4]. That usage of free_sized is already going to
be considered undefined, but we should avoid undefined behavior in the
caller from triggering more undefined behavior when it's avoidable.
2021-02-16 08:31:17 -05:00
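The hazard described in this commit can be illustrated with a minimal sketch (this is not hardened_malloc's actual get_large_size_class, and the helper name is invented): rounding a size up with `__builtin_clzl` needs a guard for small inputs, because `__builtin_clzl(0)` is undefined behavior, and so is shifting a value by its full bit width.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/*
 * Illustrative power-of-two rounding, guarded against the two
 * undefined-behavior edge cases: __builtin_clzl(0) and a shift by the
 * full width of size_t.
 */
static size_t round_up_pow2(size_t size) {
    if (size <= 1) {
        return 1; /* avoids __builtin_clzl(0), which is undefined */
    }
    unsigned bits = sizeof(size_t) * CHAR_BIT;
    unsigned lz = (unsigned)__builtin_clzl(size - 1);
    /* for sizes up to SIZE_MAX / 2 + 1 (plenty for an allocator's size
       limit), lz >= 1, so the shift below is strictly less than bits */
    return (size_t)1 << (bits - lz);
}
```

Without the `size <= 1` guard, a call such as `round_up_pow2(1)` would evaluate `__builtin_clzl(0)` — exactly the kind of well-defined API usage triggering undefined behavior that the commit describes.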
Thibaut Sautereau
1984cb3b3d malloc_object_size: avoid fault for invalid region
It's the region pointer that can be NULL here, and p was checked at the
beginning of the function.
2021-02-10 17:43:36 -05:00
Thibaut Sautereau
76860c72e1 malloc_usable_size: clean abort on invalid region
It's the region pointer that can be NULL here, and p was checked at the
beginning of the function. Also fix the test accordingly.
2021-02-10 17:41:17 -05:00
Daniel Micay
5275563252 fix C++ sized deallocation check false positive
This is a compatibility issue triggered when both slab canaries and the
C++ allocator overloads providing sized deallocation checks are enabled.

The boundary where slab allocations are turned into large allocations
due to not having room for the canary in the largest slab allocation
size class triggers a false positive in the sized deallocation check.
2021-01-06 00:18:59 -05:00
Daniel Micay
b90f650153 fix sized deallocation check with large sizes
The CONFIG_CXX_ALLOCATOR feature enables sanity checks for sized
deallocation, and these checks weren't updated to handle the
introduction of size class rounding for large sizes.
2020-11-10 13:53:32 -05:00
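The shape of the check can be sketched as follows. All names are hypothetical and the rounding rule (page-multiple size classes) is a stand-in, not hardened_malloc's real size class scheme: the size passed to sized deallocation is whatever the caller requested, so it must be rounded to its size class before being compared with the allocation's actual size.

```c
#include <assert.h>
#include <stddef.h>

/* stand-in large size class rounding: round up to a 4096-byte multiple */
static size_t round_large_size(size_t size) {
    return (size + 4095) & ~(size_t)4095;
}

/*
 * Illustrative sized-deallocation sanity check: compare the rounded
 * requested size against the allocation's actual size. Comparing the
 * raw requested size would falsely reject e.g. a sized delete of 5000
 * bytes on an allocation that was rounded up to 8192 bytes.
 */
static int sized_free_matches(size_t allocated_size, size_t requested_size) {
    return round_large_size(requested_size) == allocated_size;
}
```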
Daniel Micay
b072022022 perform init sanity checks before MPK unsealing 2020-10-06 17:34:35 -04:00
Daniel Micay
2bb1c39d31 add MPK support for stats retrieval functions 2020-10-06 17:32:25 -04:00
Daniel Micay
0bf18b7c26 optimize malloc_usable_size enforce_init 2020-10-03 15:10:49 -04:00
Daniel Micay
178d4f320f harden checks for uninitialized usage 2020-10-02 15:06:29 -04:00
Daniel Micay
483b1d7b8b empty malloc_info output when stats are disabled 2020-09-17 17:42:18 -04:00
Daniel Micay
96eca21ac5 remove thread_local macro workaround for glibc < 2.28 2020-09-17 17:38:40 -04:00
Daniel Micay
b4bbd09f07 change label for quarantined large allocations 2020-09-17 16:56:01 -04:00
Daniel Micay
a88305c01b support disabling region quarantine 2020-09-17 16:53:34 -04:00
Daniel Micay
85c5c3736c add stats tracking to special large realloc paths 2020-09-17 16:29:13 -04:00
Daniel Micay
96a9bcf3a1 move deprecated glibc extensions to the bottom 2020-09-17 16:20:05 -04:00
Daniel Micay
41fb89517a simplify malloc_info code 2020-09-17 16:10:02 -04:00
Daniel Micay
50e0f1334c add is_init check to malloc_info 2020-09-17 16:07:10 -04:00
Daniel Micay
9fb2791af2 add is_init check to h_mallinfo_arena_info 2020-09-17 16:00:03 -04:00
anupritaisno1
8974af86d1 hardened malloc: iterate -> malloc_iterate
Signed-off-by: anupritaisno1 <www.anuprita804@gmail.com>
2020-09-15 00:37:23 -04:00
Daniel Micay
dd7291ebfe better wording for page size mismatch error 2020-08-05 18:10:53 -04:00
Daniel Micay
bcb93cab63 avoid an ifdef 2020-08-04 17:22:03 -04:00
rwarr627
f214bd541a added check for whether small allocations are free 2020-06-17 23:29:30 -04:00
Daniel Micay
722974f4e9 remove trailing whitespace 2020-06-13 09:59:50 -04:00
rwarr627
577524798e calculates offset from start for small allocations 2020-06-13 01:27:32 -04:00
Daniel Micay
467ba8440f add comment explaining slab cache size 2020-05-24 09:36:43 -04:00
Daniel Micay
067b3c864f set slab cache sizes based on the largest slab 2020-05-24 09:31:02 -04:00
Daniel Micay
4a6bbe445c limit cached slabs based on max size class 2020-05-13 01:05:37 -04:00
Daniel Micay
b672316bc7 use const for memory_corruption_check_small
This currently causes a warning (treated as an error) on Android where
malloc_usable_size uses a const pointer.
2020-04-30 16:06:32 -04:00
Daniel Micay
029a2edf28 remove trailing whitespace 2020-04-30 16:03:45 -04:00
rwarr627
35bd7cd76d added memory corruption checking to malloc_usable_size for slab allocations 2020-04-29 18:06:15 -04:00
Daniel Micay
19365c25d6 remove workaround for Linux kernel MPK fork bug 2020-04-24 02:51:39 -04:00
Daniel Micay
0436227092 no longer need glibc pthread_atfork workaround 2020-03-29 11:40:12 -04:00
Daniel Micay
449962e044 disable obsolete glibc extensions elsewhere 2020-02-03 08:39:19 -05:00
Daniel Micay
a28da3c65a use prefix for extended mallinfo functions 2019-09-07 18:33:24 -04:00
Daniel Micay
d37657e125 enable llvm-include-order tidy check 2019-08-18 02:39:55 -04:00
Daniel Micay
d80919fa1e substantially raise the arbitrary arena limit 2019-07-12 03:43:33 -04:00
Daniel Micay
410e9efb93 extend configuration sanity checks 2019-07-11 17:09:48 -04:00
Daniel Micay
a32e26b8e9 avoid trying to use mremap outside of Linux 2019-07-05 21:59:44 -04:00
Daniel Micay
bc75c4db7b realloc: use copy_size to check for canaries
This avoids unnecessarily copying the canary when doing a realloc from a
small size to a large size. It also avoids trying to copy a non-existent
canary out of zero-size allocations, which are memory protected.
2019-06-17 00:28:10 -04:00
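The logic this commit describes can be sketched as below, with illustrative parameter names and values; the sizes here are assumed to be internal sizes that include the canary, and the constants are made up for the example:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of gating canary handling on copy_size, the smaller of the old
 * and new sizes. Slab allocations end in a canary that must not be
 * copied as data, and zero-size allocations are memory-protected and
 * have no canary at all, so the adjustment applies only when the copied
 * region is a non-empty slab-sized region.
 */
static size_t adjusted_copy_size(size_t size, size_t old_size,
                                 size_t max_slab_size, size_t canary_size) {
    size_t copy_size = size < old_size ? size : old_size;
    if (copy_size > 0 && copy_size <= max_slab_size) {
        copy_size -= canary_size;
    }
    return copy_size;
}
```

A realloc from a small size to a large size yields a slab-sized copy_size, so only the data before the canary is copied; a zero-size source yields copy_size 0, so the protected region is never read.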
Daniel Micay
12525f2861 work around old glibc releases without threads.h 2019-06-06 08:10:57 -04:00
Daniel Micay
409a639312 provide working malloc_info outside Android too 2019-04-19 16:56:07 -04:00
Daniel Micay
494436c904 implement options handling for malloc_info 2019-04-19 16:23:14 -04:00
Daniel Micay
a13db3fc68 initialize size class CSPRNGs from init CSPRNG
This avoids making a huge number of getrandom system calls during
initialization. The init CSPRNG is unmapped before initialization
finishes and these are still reseeded from the OS. The purpose of the
independent CSPRNGs is simply to avoid the massive performance hit of
synchronization and there's no harm in doing it this way.

Keeping around the init CSPRNG and reseeding from it would defeat the
purpose of reseeding, and it isn't a measurable performance issue since
it can just be tuned to reseed less often.
2019-04-15 06:50:24 -04:00
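The seeding structure described above can be sketched as follows. Only the structure is the point here: one OS-seeded init generator produces the seeds for every per-size-class generator, replacing one getrandom() system call per size class with a single call. The xorshift mixer is a stand-in so the example is self-contained; it is NOT a CSPRNG, and all names are illustrative rather than hardened_malloc's internals.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define N_SIZE_CLASSES 4

struct rng { uint64_t state; };

/* Marsaglia xorshift64 step: a deterministic stand-in mixer, not a
   cryptographically secure generator. */
static uint64_t rng_next(struct rng *r) {
    r->state ^= r->state << 13;
    r->state ^= r->state >> 7;
    r->state ^= r->state << 17;
    return r->state;
}

/* Seed each per-size-class generator from the single init generator
   instead of issuing one OS entropy call per size class. The real init
   CSPRNG is unmapped once initialization finishes, and the per-class
   generators later reseed directly from the OS. */
static void seed_size_class_rngs(struct rng *init_rng,
                                 struct rng out[N_SIZE_CLASSES]) {
    for (size_t i = 0; i < N_SIZE_CLASSES; i++) {
        uint64_t seed = rng_next(init_rng);
        out[i].state = seed ? seed : 1; /* xorshift state must be nonzero */
    }
}
```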
Daniel Micay
f115be8392 shrink initial region table size to fit in 1 page 2019-04-15 00:04:00 -04:00
Daniel Micay
e7eeb3f35c avoid reading thread_local more than once 2019-04-14 20:26:14 -04:00
Daniel Micay
7e465c621e use allocate_large directly in large remap path 2019-04-14 19:46:22 -04:00
Daniel Micay
1c899657c1 add is_init check to mallinfo functions 2019-04-14 19:12:38 -04:00
Daniel Micay
8774065b13 fix non-init size for malloc_object_size extension 2019-04-14 19:01:25 -04:00
Daniel Micay
84a25ec83e fix build with CONFIG_STATS enabled 2019-04-11 00:51:34 -04:00
Daniel Micay
d4b8fee1c4 allow using the largest slab allocation size 2019-04-10 16:54:58 -04:00