expand design documentation further
This commit is contained in:
parent 5b3d59ec7d
commit 0e4ea0090b

README.md (44 lines changed)
@@ -283,21 +283,41 @@
exclusive to 64-bit platforms in order to take full advantage of the abundant
address space without being constrained by needing to keep the design
compatible with 32-bit.

The mutable allocator state is entirely located within a dedicated metadata
region, and the allocator is designed around this approach for both small
(slab) allocations and large allocations. This provides reliable, deterministic
protections against invalid free including double frees, and protects metadata
from attackers. Traditional allocator exploitation techniques do not work with
the hardened\_malloc implementation.
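
The out-of-line state is what makes the invalid free checks deterministic. As a
rough sketch of the idea (the struct layout, names and sizes here are
hypothetical, not hardened\_malloc's actual internals), freeing a slot whose
bitmap bit is already clear is unambiguous evidence of a double free:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical slot-state bitmap; in this sketch it lives in a dedicated
 * metadata region, never adjacent to application data. */
struct slab_metadata {
    uint64_t bitmap[4]; /* 1 bit per slot: set = allocated, clear = free */
};

/* Clear the slot's bit on free. Since the bit is authoritative and cannot
 * be corrupted by heap overflows, an already-clear bit deterministically
 * identifies a double/invalid free rather than relying on heuristics. */
static bool mark_slot_free(struct slab_metadata *m, size_t slot) {
    uint64_t mask = UINT64_C(1) << (slot % 64);
    if (!(m->bitmap[slot / 64] & mask)) {
        return false; /* double free detected: a real allocator would abort */
    }
    m->bitmap[slot / 64] &= ~mask;
    return true;
}
```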

Small allocations are always located in a large memory region reserved for slab
allocations. On free, it can be determined that an allocation is one of the
small size classes from the address range. If arenas are enabled, the arena is
also determined from the address range as each arena has a dedicated sub-region
in the slab allocation region. Arenas provide totally independent slab
allocators with their own allocator state and no coordination between them.
Once the base region is determined (simply the slab allocation region as a
whole without any arenas enabled), the size class is determined from the
address range too, since it's divided up into a sub-region for each size class.
There's a top level slab allocation region, divided up into arenas, with each
of those divided up into size class regions. The size class regions each have a
random base within a large guard region. Once the size class is determined, the
slab size is known, and the index of the slab is calculated and used to obtain
the slab metadata for the slab from the slab metadata array. Finally, the index
of the slot within the slab provides the index of the bit tracking the slot in
the bitmap. Every slab allocation slot has a dedicated bit in a bitmap tracking
whether it's free, along with a separate bitmap for tracking allocations in the
quarantine. The slab metadata entries in the array have intrusive lists
threaded through them to track partial slabs (partially filled, and these are
the first choice for allocation), empty slabs (limited amount of cached free
memory) and free slabs (purged / memory protected).
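
That free-path lookup is pure address arithmetic. Here is a miniature sketch of
the flow under assumed layout constants (the constants, struct and the
`locate_slot` helper are illustrative assumptions, not the real
implementation): offset in the reservation, then arena, then size class, then
slab index, then slot index.

```c
#include <stddef.h>
#include <stdint.h>

#define ARENA_COUNT 4            /* assumption for the sketch */
#define SIZE_CLASS_COUNT 48      /* assumption for the sketch */
#define CLASS_REGION_SIZE (UINT64_C(1) << 36) /* per size class, assumed */

struct size_class_state {
    uintptr_t slab_base;   /* random base inside the class's guard region */
    size_t slab_size;      /* a multiple of the page size */
    size_t slot_size;
};

static uintptr_t slab_region_start; /* start of the static reservation */
static struct size_class_state classes[ARENA_COUNT][SIZE_CLASS_COUNT];

/* Resolve a freed pointer to (arena, class, slab, slot) from ranges alone. */
static void locate_slot(void *p, size_t *arena, size_t *class_idx,
                        size_t *slab_idx, size_t *slot_idx) {
    uintptr_t offset = (uintptr_t)p - slab_region_start;

    /* Each arena owns a fixed sub-region, and each size class owns a fixed
     * sub-region within the arena, so both fall out of the offset. */
    *arena = offset / (SIZE_CLASS_COUNT * CLASS_REGION_SIZE);
    *class_idx = (offset / CLASS_REGION_SIZE) % SIZE_CLASS_COUNT;

    /* The slab index doubles as the index into the out-of-line slab
     * metadata array; the slot index selects the bit in the bitmap. */
    struct size_class_state *c = &classes[*arena][*class_idx];
    uintptr_t class_offset = (uintptr_t)p - c->slab_base;
    *slab_idx = class_offset / c->slab_size;
    *slot_idx = (class_offset % c->slab_size) / c->slot_size;
}
```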

Large allocations are tracked via a global hash table mapping their address to
their size and random guard size. They're simply memory mappings and get mapped
on allocation and then unmapped on free. Large allocations are the only dynamic
memory mappings made by the allocator, since the address space for allocator
state (including both small / large allocation metadata) and slab allocations
is statically reserved.
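
A minimal sketch of that bookkeeping might look like the following
open-addressed table; the entry layout, capacity and hash are assumptions for
illustration, not hardened\_malloc's actual hash table:

```c
#include <stddef.h>
#include <stdint.h>

/* One entry per live large allocation, stored in the reserved metadata
 * region rather than near the mappings themselves. */
struct large_entry {
    void *addr;        /* address returned to the application */
    size_t size;       /* usable size of the mapping */
    size_t guard_size; /* randomly sized guards placed around the mapping */
};

#define TABLE_SLOTS 1024 /* fixed capacity for the sketch; a real table grows */
static struct large_entry table[TABLE_SLOTS];

/* Linear-probe lookup on free: recovers the size and guard size needed to
 * unmap the allocation, and rejects pointers that were never returned by
 * the allocator (another deterministic invalid free check). */
static struct large_entry *lookup_large(void *p) {
    size_t i = ((uintptr_t)p >> 12) % TABLE_SLOTS; /* toy hash */
    for (size_t n = 0; n < TABLE_SLOTS; n++) {
        struct large_entry *e = &table[(i + n) % TABLE_SLOTS];
        if (e->addr == p) {
            return e;
        }
        if (e->addr == NULL) {
            break; /* empty slot ends the probe chain: not tracked */
        }
    }
    return NULL;
}
```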

This allocator is aimed at production usage, not aiding with finding and fixing
memory corruption bugs for software development. It does find many latent bugs