Struct mmtk::util::alloc::free_list_allocator::FreeListAllocator

#[repr(C)]
pub struct FreeListAllocator<VM: VMBinding> {
pub tls: VMThread,
space: &'static MarkSweepSpace<VM>,
context: Arc<AllocatorContext<VM>>,
pub available_blocks: Box<[BlockList; 49]>,
pub available_blocks_stress: Box<[BlockList; 49]>,
pub unswept_blocks: Box<[BlockList; 49]>,
pub consumed_blocks: Box<[BlockList; 49]>,
}
A MiMalloc free list allocator
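The allocator keeps size-segregated block lists (49 bins, as shown in the fields below), and allocation only searches the bin matching the requested size class. The snippet below is a rough, hypothetical illustration of such a size-to-bin mapping; the constants and rounding rule are illustrative only and are not MMTk's actual bin mapping.

    // Hypothetical sketch of size-class binning; the constants and the rounding
    // rule here are illustrative only and are NOT MMTk's real bin mapping.
    const WORD_SIZE: usize = 8;
    const NUM_BINS: usize = 49;

    /// Map a request size (in bytes) to a bin index in [0, NUM_BINS).
    /// Small sizes get one bin per word; larger sizes share coarser bins.
    fn bin_for_size(size: usize) -> usize {
        let words = (size + WORD_SIZE - 1) / WORD_SIZE; // round up to whole words
        // Clamp so every size maps to some bin; real allocators use a finer
        // logarithmic spacing for large size classes.
        words.min(NUM_BINS - 1)
    }

    fn main() {
        assert_eq!(bin_for_size(1), 1);
        assert_eq!(bin_for_size(8), 1);
        assert_eq!(bin_for_size(9), 2);
        assert_eq!(bin_for_size(10_000), NUM_BINS - 1);
    }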
Fields
tls: VMThread
The VMThread associated with this allocator instance.
space: &'static MarkSweepSpace<VM>
context: Arc<AllocatorContext<VM>>
available_blocks: Box<[BlockList; 49]>
Blocks with free space.
available_blocks_stress: Box<[BlockList; 49]>
Blocks with free space, used for precise stress GC. For precise stress GC, we need to be able to trigger slowpath allocation for each allocation. To achieve this, available blocks are moved to this list instead, so normal fastpath allocation sees the regular block lists as empty and fails (see the sketch after this field list).
unswept_blocks: Box<[BlockList; 49]>
Blocks that have been marked but not yet swept.
consumed_blocks: Box<[BlockList; 49]>
Blocks that are fully occupied.
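Below is a minimal, self-contained sketch of the precise-stress idea described for available_blocks_stress, using toy types rather than MMTk's BlockList or Block: the fastpath only ever scans the regular available lists, so parking blocks in a separate stress list makes every fastpath attempt fail and fall through to the slow path, where a GC poll can happen.

    // Toy illustration of the precise-stress mechanism; the types are
    // hypothetical stand-ins, not MMTk's implementation.
    const NUM_BINS: usize = 49;

    #[derive(Default, Clone)]
    struct ToyBlockList {
        blocks: Vec<usize>, // block ids stand in for real blocks
    }

    struct ToyAllocator {
        available: Vec<ToyBlockList>,        // scanned by the fastpath
        available_stress: Vec<ToyBlockList>, // hidden from the fastpath
    }

    impl ToyAllocator {
        fn new() -> Self {
            ToyAllocator {
                available: vec![ToyBlockList::default(); NUM_BINS],
                available_stress: vec![ToyBlockList::default(); NUM_BINS],
            }
        }

        /// Fastpath: succeeds only if the bin has a visible block.
        fn alloc_fast(&mut self, bin: usize) -> Option<usize> {
            self.available[bin].blocks.last().copied()
        }

        /// Precise stress mode parks the bin's blocks in the stress list,
        /// so the next fastpath attempt fails and falls into the slowpath.
        fn park_for_stress(&mut self, bin: usize) {
            let blocks = std::mem::take(&mut self.available[bin].blocks);
            self.available_stress[bin].blocks.extend(blocks);
        }
    }

    fn main() {
        let mut a = ToyAllocator::new();
        a.available[3].blocks.push(42);
        assert!(a.alloc_fast(3).is_some()); // fastpath succeeds
        a.park_for_stress(3);
        assert!(a.alloc_fast(3).is_none()); // fastpath now fails -> slowpath/GC poll
    }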
Implementations
impl<VM: VMBinding> FreeListAllocator<VM>
pub(crate) fn new(tls: VMThread, space: &'static MarkSweepSpace<VM>, context: Arc<AllocatorContext<VM>>) -> Self
fn block_alloc(&mut self, block: Block) -> Address
fn find_free_block_stress(&mut self, size: usize, align: usize) -> Option<Block>
fn find_free_block_local(&mut self, size: usize, align: usize) -> Option<Block>
fn find_free_block_with(available_blocks: &mut Box<[BlockList; 49]>, consumed_blocks: &mut Box<[BlockList; 49]>, size: usize, align: usize) -> Option<Block>
fn add_to_available_blocks(&mut self, bin: usize, block: Block, stress: bool)
Add a block to the given bin in the available block lists. Depending on the stress flag, the block is added to either available_blocks or available_blocks_stress.
fn recycle_local_blocks(&mut self, size: usize, align: usize, _stress_test: bool) -> Option<Block>
Tries to recycle local blocks if there are any. This is a no-op for mark sweep with eager sweeping.
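A minimal sketch of what recycling local blocks can mean under lazy sweeping, using toy types rather than MMTk's real ones: pop a block from the thread-local unswept list, sweep it, and make it available again. With eager sweeping the unswept lists are always empty, so the operation is a no-op.

    // Toy illustration of lazy-sweep recycling; the types and sweeping logic
    // are hypothetical stand-ins, not MMTk's implementation.
    struct ToyBlock {
        id: usize,
        swept: bool,
    }

    struct ToyLocalLists {
        unswept: Vec<ToyBlock>,   // marked in the last GC, not yet swept
        available: Vec<ToyBlock>, // ready to allocate from
    }

    impl ToyLocalLists {
        /// Recycle one unswept local block if there is any.
        /// Under eager sweeping `unswept` is always empty and this returns None.
        fn recycle_local_block(&mut self) -> Option<&ToyBlock> {
            let mut block = self.unswept.pop()?;
            block.swept = true; // stand-in for rebuilding the block's free list
            self.available.push(block);
            self.available.last()
        }
    }

    fn main() {
        let mut lists = ToyLocalLists {
            unswept: vec![ToyBlock { id: 7, swept: false }],
            available: Vec::new(),
        };
        assert!(lists.recycle_local_block().is_some());
        assert!(lists.recycle_local_block().is_none()); // nothing left to recycle
    }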
fn acquire_global_block(&mut self, size: usize, align: usize, stress_test: bool) -> Option<Block>
Get a block from the space.
fn init_block(&self, block: Block, cell_size: usize)
fn store_block_tls(&self, block: Block)
pub(crate) fn prepare(&mut self)
pub(crate) fn release(&mut self)
fn abandon_blocks(&mut self, global: &mut AbandonedBlockLists)
Trait Implementations
impl<VM: VMBinding> Allocator<VM> for FreeListAllocator<VM>
fn get_space(&self) -> &'static dyn Space<VM>
Return the Space instance associated with this allocator instance.
fn get_context(&self) -> &AllocatorContext<VM>
Return the context for the allocator.
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
An allocation attempt. The implementation of this function depends on the allocator used. If an allocator supports thread local allocations, then the allocation will be serviced from its TLAB, otherwise it will default to using the slowpath, i.e. alloc_slow.
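To illustrate the fastpath/slowpath split described above, here is a generic, hypothetical sketch of the pattern using a simple bump-pointer buffer. It is not MMTk's code and not this free-list allocator specifically: the fastpath takes a cheap thread-local hit, and only on failure calls an out-of-line slow path.

    // Hypothetical fastpath/slowpath split; the buffer fields and the slow-path
    // behaviour here are illustrative placeholders, not MMTk's allocator.
    struct ToyTlab {
        cursor: usize,
        limit: usize,
    }

    impl ToyTlab {
        /// Fastpath: bump within the thread-local buffer if it fits.
        fn alloc(&mut self, size: usize) -> usize {
            let new_cursor = self.cursor + size;
            if new_cursor <= self.limit {
                let result = self.cursor;
                self.cursor = new_cursor;
                result
            } else {
                self.alloc_slow(size)
            }
        }

        /// Slowpath: kept out of line so the fastpath stays small and inlinable.
        #[inline(never)]
        fn alloc_slow(&mut self, size: usize) -> usize {
            // A real slow path would acquire a new buffer from the space and may
            // trigger a GC poll; here we just pretend to get a fresh 32 KB buffer.
            self.cursor = 0x10000;
            self.limit = self.cursor + 32 * 1024;
            self.alloc(size)
        }
    }

    fn main() {
        let mut tlab = ToyTlab { cursor: 0, limit: 0 };
        let a = tlab.alloc(24); // first allocation takes the slow path
        let b = tlab.alloc(24); // second one is a pure bump
        assert_eq!(b, a + 24);
    }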
fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address
Single slow path allocation attempt. This is called by alloc_slow_inline. The implementation of this function depends on the allocator used. Generally, if an allocator supports thread local allocations, it will try to allocate more TLAB space here. If it doesn't, then (generally) the allocator simply allocates enough space for the current object.
fn does_thread_local_allocation(&self) -> bool
Return whether this allocator can do thread local allocation. If an allocator does not do thread local allocation, each allocation will go to the slowpath and will have a check for GC polls.
fn get_thread_local_buffer_granularity(&self) -> usize
Return the granularity at which the allocator acquires memory from the global space to use as its thread local buffer. For example, the BumpAllocator acquires memory in 32KB blocks: depending on the size of the current object, it always acquires N*32KB (N>=1), so the BumpAllocator returns 32KB from this method. Only allocators that do thread local allocation need to implement this method.
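As a small, hypothetical illustration of the N*32KB contract described above (the constant and helper below are not part of MMTk's API):

    // Hypothetical helper showing the granularity contract: a thread-local
    // allocator acquires memory in whole multiples of its buffer granularity.
    const GRANULARITY: usize = 32 * 1024; // e.g. the BumpAllocator's 32KB blocks

    fn bytes_acquired_for(request: usize) -> usize {
        // N >= 1: even a tiny request still takes one full granule.
        let n = std::cmp::max(1, (request + GRANULARITY - 1) / GRANULARITY);
        n * GRANULARITY
    }

    fn main() {
        assert_eq!(bytes_acquired_for(1), 32 * 1024);
        assert_eq!(bytes_acquired_for(40 * 1024), 64 * 1024);
    }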
fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address
Single slowpath allocation attempt for stress test. When the stress factor is set (e.g. to N), we expect a stress GC to be triggered for every N bytes allocated. However, allocators that do thread local allocation may allocate from their thread local buffer, which has no GC poll check, and they may even allocate with the JIT-generated allocation fastpath, which is unaware of stress test GC. In both cases, we cannot guarantee that a stress GC is triggered every N bytes. To solve this, when the stress factor is set, we call this method instead of the normal alloc_slow_once(). We expect the implementation of this slow allocation to trick the fastpath so that every allocation fails in the fastpath, jumps to the slow path, and eventually calls this method again for the actual allocation.
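A minimal sketch of the stress-factor accounting this mechanism supports, using a hypothetical counter rather than MMTk's bookkeeping: once every allocation reaches the slow path, the runtime can count bytes and trigger a stress GC roughly every N bytes.

    // Hypothetical stress-factor counter: with factor N, signal a stress GC
    // roughly once every N allocated bytes. Not MMTk's actual bookkeeping.
    struct StressCounter {
        factor: usize,    // N: trigger a stress GC every N allocated bytes
        allocated: usize, // bytes allocated since the last trigger
    }

    impl StressCounter {
        /// Record an allocation; return true when a stress GC should be triggered.
        fn on_alloc(&mut self, bytes: usize) -> bool {
            self.allocated += bytes;
            if self.allocated >= self.factor {
                self.allocated -= self.factor;
                true
            } else {
                false
            }
        }
    }

    fn main() {
        let mut c = StressCounter { factor: 100, allocated: 0 };
        assert!(!c.on_alloc(60));
        assert!(c.on_alloc(60)); // crossed 100 bytes -> stress GC
        assert!(!c.on_alloc(10));
    }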
fn on_mutator_destroy(&mut self)
The crate::plan::Mutator that includes this allocator is going to be destroyed. Some allocators may need to save/transfer their thread local data to the space.
fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function is explicitly not inlined for performance considerations.
fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function executes the actual slowpath allocation. A slowpath allocation in MMTk attempts to allocate the object using the per-allocator definition of alloc_slow_once. This function also accounts for increasing the allocation bytes in order to support stress testing. In case precise stress testing is being used, the alloc_slow_once_precise_stress function is used instead.
fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address
A wrapper method for alloc_slow_once to insert USDT tracepoints.

Auto Trait Implementations
impl<VM> !RefUnwindSafe for FreeListAllocator<VM>
impl<VM> Send for FreeListAllocator<VM>
impl<VM> Sync for FreeListAllocator<VM>
impl<VM> Unpin for FreeListAllocator<VM>
impl<VM> !UnwindSafe for FreeListAllocator<VM>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.

impl<T> DowncastSync for T

impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.