Struct mmtk::util::alloc::free_list_allocator::FreeListAllocator
#[repr(C)]
pub struct FreeListAllocator<VM: VMBinding> {
pub tls: VMThread,
space: &'static MarkSweepSpace<VM>,
context: Arc<AllocatorContext<VM>>,
pub available_blocks: Box<[BlockList; 49]>,
pub available_blocks_stress: Box<[BlockList; 49]>,
pub unswept_blocks: Box<[BlockList; 49]>,
pub consumed_blocks: Box<[BlockList; 49]>,
}
A MiMalloc free list allocator
Fields
tls: VMThread
The VMThread associated with this allocator instance.
space: &'static MarkSweepSpace<VM>
context: Arc<AllocatorContext<VM>>
available_blocks: Box<[BlockList; 49]>
Blocks with free space.
available_blocks_stress: Box<[BlockList; 49]>
Blocks with free space, used for precise stress GC. For precise stress GC, we need to be able to trigger slowpath allocation on every allocation. To achieve this, we put available blocks on this list instead, so normal fastpath allocation will fail, as it sees the regular block lists as empty.
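As an illustration of that split (with made-up stand-in types, not this crate's real Block or BlockList API), the fastpath only ever reads available_blocks, so parking blocks on the stress list guarantees that the slowpath, and its GC poll, runs on every allocation:

struct Bins { available: Vec<u32>, available_stress: Vec<u32> }

impl Bins {
    // Fastpath: consults only `available`. Under precise stress GC that
    // list is kept empty, so every allocation falls through to the slowpath.
    fn alloc_fast(&mut self) -> Option<u32> {
        self.available.pop()
    }

    // Slowpath: performs the stress poll, then serves from the stress list.
    fn alloc_slow(&mut self, precise_stress: bool) -> Option<u32> {
        if precise_stress {
            // poll_for_gc() would run here on every allocation (elided)
            self.available_stress.pop()
        } else {
            self.available.pop()
        }
    }
}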
unswept_blocks: Box<[BlockList; 49]>
Blocks that are marked but not yet swept.
consumed_blocks: Box<[BlockList; 49]>
Full blocks, with no remaining free cells.
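The 49 entries in each array correspond to mimalloc-style size-class bins: every block in a bin is carved into cells of that bin's size. The following is a sketch of such a binning function in the mimalloc style; it is illustrative only, not necessarily the exact computation this crate uses (the real allocator also clamps oversized requests to the largest bin):

fn bin_for_size(size: usize) -> usize {
    let wsize = (size + 7) / 8; // request size in 8-byte words, rounded up
    if wsize <= 1 {
        return 1; // bin 0 is reserved in the mimalloc scheme
    }
    if wsize <= 8 {
        return wsize; // one bin per word count for small sizes
    }
    // Larger sizes: four bins per power of two, so cell sizes grow by at
    // most ~12.5% from one bin to the next.
    let w = wsize - 1;
    let b = (usize::BITS - 1 - w.leading_zeros()) as usize; // highest set bit
    ((b << 2) | ((w >> (b - 2)) & 0x03)) - 3
}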
Implementations
impl<VM: VMBinding> FreeListAllocator<VM>
pub(crate) fn new(tls: VMThread, space: &'static MarkSweepSpace<VM>, context: Arc<AllocatorContext<VM>>) -> Self
fn block_alloc(&mut self, block: Block) -> Address
fn find_free_block_stress(&mut self, size: usize, align: usize) -> Option<Block>
fn find_free_block_local(&mut self, size: usize, align: usize) -> Option<Block>
fn find_free_block_with(available_blocks: &mut Box<[BlockList; 49]>, consumed_blocks: &mut Box<[BlockList; 49]>, size: usize, align: usize) -> Option<Block>
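In outline (hypothetical simplified types, not the real Block/BlockList API), the search walks a bin's available list and retires exhausted blocks to the consumed list:

#[derive(Clone, Copy)]
struct FakeBlock { free_cells: usize }

fn find_free_block(
    available: &mut Vec<FakeBlock>,
    consumed: &mut Vec<FakeBlock>,
) -> Option<FakeBlock> {
    while let Some(block) = available.last().copied() {
        if block.free_cells > 0 {
            return Some(block); // still has space: stays at the bin head
        }
        available.pop(); // exhausted: retire it to the consumed list
        consumed.push(block);
    }
    None // caller falls back to recycling or acquiring a global block
}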
fn add_to_available_blocks(&mut self, bin: usize, block: Block, stress: bool)
Add a block to the given bin in the available block lists. Depending on which available block list we are using, this method may add the block to available_blocks or available_blocks_stress.
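In sketch form (stand-in types), the routing this describes:

struct StressBins {
    available: Vec<Vec<u32>>,        // one list per size-class bin
    available_stress: Vec<Vec<u32>>, // only consulted by the slowpath
}

fn add_to_available_blocks(bins: &mut StressBins, bin: usize, block: u32, stress: bool) {
    if stress {
        bins.available_stress[bin].push(block); // invisible to the fastpath
    } else {
        bins.available[bin].push(block);
    }
}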
fn recycle_local_blocks(&mut self, size: usize, align: usize, _stress_test: bool) -> Option<Block>
Tries to recycle local blocks, if there are any. This is a no-op for mark sweep with eager sweeping.
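Under lazy sweeping, recycling means sweeping unswept blocks on demand until one yields free cells. A hypothetical sketch, with made-up types standing in for Block and BlockList:

#[derive(Clone, Copy)]
struct UnsweptBlock { live_cells: usize, cells: usize }

fn recycle_local_blocks(
    unswept: &mut Vec<UnsweptBlock>,
    available: &mut Vec<UnsweptBlock>,
    consumed: &mut Vec<UnsweptBlock>,
) -> Option<UnsweptBlock> {
    while let Some(block) = unswept.pop() {
        // sweep(block): rebuild its free list from unmarked cells (elided)
        if block.live_cells < block.cells {
            available.push(block); // sweeping found reusable cells
            return Some(block);
        }
        consumed.push(block); // every cell is still live
    }
    None // under eager sweeping `unswept` is always empty: a no-op
}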
fn acquire_global_block(&mut self, size: usize, align: usize, stress_test: bool) -> Option<Block>
Get a block from the space.
fn init_block(&self, block: Block, cell_size: usize)
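Initialization carves a fresh block into equal-sized cells for its bin. A sketch of one common way to build such an intrusive free list (an assumed layout, not necessarily this crate's exact cell format):

// Assumes cell_size is a multiple of the word size, so each free cell can
// hold the next pointer in its first word.
unsafe fn build_free_list(block_start: *mut u8, block_bytes: usize, cell_size: usize) -> *mut usize {
    let mut head: *mut usize = std::ptr::null_mut();
    let mut cursor = block_start as usize;
    let end = cursor + (block_bytes / cell_size) * cell_size;
    while cursor < end {
        let cell = cursor as *mut usize;
        *cell = head as usize; // the next pointer lives inside the free cell
        head = cell;
        cursor += cell_size;
    }
    head // first free cell; allocation pops from here
}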
fn store_block_tls(&self, block: Block)
pub(crate) fn prepare(&mut self)
pub(crate) fn release(&mut self)
const ABANDON_BLOCKS_IN_RESET: bool = true
Do we abandon allocator-local blocks in reset? We should do this for GC: otherwise blocks stay held by each allocator and cannot be reused by other allocators, which was measured to cause up to a 100% increase in the minimum heap size for mark sweep.
fn reset(&mut self)
Eager sweeping. We sweep all the block lists and move them to the available block lists.
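A minimal sketch of that movement, assuming stand-in types (the sweep itself is elided):

struct SweptBlock;

fn reset_eager(
    available: &mut Vec<SweptBlock>,
    consumed: &mut Vec<SweptBlock>,
    unswept: &mut Vec<SweptBlock>,
) {
    // Sweep every block now (sweeping elided; completely free blocks would
    // be returned to the space) and make everything available again, so the
    // next mutator phase starts from fully swept bins.
    for block in consumed.drain(..).chain(unswept.drain(..)) {
        available.push(block);
    }
}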
fn abandon_blocks(&mut self, global: &mut AbandonedBlockLists)
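The abandon step can be sketched as draining the allocator-local lists into a space-global pool under a lock (hypothetical types; the real code moves BlockList bins into the space's AbandonedBlockLists):

use std::sync::Mutex;

struct LocalBins { available: Vec<u32>, consumed: Vec<u32>, unswept: Vec<u32> }
struct GlobalPool { abandoned: Mutex<Vec<u32>> }

fn abandon_blocks(local: &mut LocalBins, global: &GlobalPool) {
    // Drain every local list into the space-global pool, which is what
    // ABANDON_BLOCKS_IN_RESET = true asks reset() to do during GC.
    let mut pool = global.abandoned.lock().unwrap();
    pool.append(&mut local.available);
    pool.append(&mut local.consumed);
    pool.append(&mut local.unswept);
}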
Trait Implementations
impl<VM: VMBinding> Allocator<VM> for FreeListAllocator<VM>
fn get_space(&self) -> &'static dyn Space<VM>
Return the Space instance associated with this allocator instance.
fn get_context(&self) -> &AllocatorContext<VM>
Return the context for the allocator.
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
An allocation attempt. The implementation of this function depends on the allocator used. If an allocator supports thread local allocations, then the allocation will be serviced from its TLAB; otherwise it will default to using the slowpath, i.e. alloc_slow.
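For this allocator, the fastpath amounts to popping a cell from the free list of the first block in the request's size-class bin. A hypothetical sketch with stand-in types:

#[derive(Clone, Copy)]
struct Cell(usize);
struct CellBlock { free_list: Vec<Cell> }

fn alloc_fast(bins: &mut [Vec<CellBlock>; 49], bin: usize) -> Option<Cell> {
    // Pop a cell from the first block in the request's size-class bin;
    // on None the caller takes alloc_slow, which may sweep, search other
    // blocks, or acquire a fresh block from the space.
    bins[bin].first_mut()?.free_list.pop()
}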
fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address
Single slow path allocation attempt. This is called by alloc_slow_inline. The implementation of this function depends on the allocator used. Generally, if an allocator supports thread local allocations, it will try to allocate more TLAB space here. If it doesn't, then (generally) the allocator simply allocates enough space for the current object.
fn does_thread_local_allocation(&self) -> bool
Return whether this allocator can do thread local allocation.
fn get_thread_local_buffer_granularity(&self) -> usize
Return the granularity at which the allocator acquires memory from the global space for use as its thread local buffer. For example, the BumpAllocator acquires memory in 32KB blocks; depending on the actual size of the current object, it always acquires N*32KB (N>=1), so the BumpAllocator returns 32KB for this method. Only allocators that do thread local allocation need to implement this method.
fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address
Single slowpath allocation attempt for stress testing. If need_poll is true, the allocator should poll the space for GC, even if it does not need a new thread local buffer.
fn on_mutator_destroy(&mut self)
Called when the crate::plan::Mutator that includes this allocator is going to be destroyed. Some allocators may need to save or transfer their thread local data to the space.
fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function is explicitly not inlined for performance considerations.
fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function executes the actual slowpath allocation using the per-allocator definition of alloc_slow_once. It also accounts for increasing the allocation bytes in order to support stress testing. In case precise stress testing is being used, the alloc_slow_once_precise_stress function is used instead.
fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address
A wrapper method for alloc_slow_once that inserts USDT tracepoints.
Auto Trait Implementations
impl<VM> !RefUnwindSafe for FreeListAllocator<VM>
impl<VM> Send for FreeListAllocator<VM>
impl<VM> Sync for FreeListAllocator<VM>
impl<VM> Unpin for FreeListAllocator<VM>
impl<VM> !UnwindSafe for FreeListAllocator<VM>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Converts Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Converts Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Converts &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Converts &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.
impl<T> DowncastSync for T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.