Struct mmtk::util::alloc::immix_allocator::ImmixAllocator
#[repr(C)]
pub struct ImmixAllocator<VM: VMBinding> {
    pub tls: VMThread,
    pub bump_pointer: BumpPointer,
    space: &'static ImmixSpace<VM>,
    context: Arc<AllocatorContext<VM>>,
    hot: bool,
    copy: bool,
    pub(super) large_bump_pointer: BumpPointer,
    request_for_large: bool,
    line: Option<Line>,
}
The Immix allocator. It allocates with two bump pointers: the fastpath bump_pointer fills holes (runs of free lines) with small objects, while large_bump_pointer serves overflow objects larger than a line.
Fields
tls: VMThread
The VMThread associated with this allocator instance.
bump_pointer: BumpPointer
The fastpath bump pointer.
space: &'static ImmixSpace<VM>
The Space instance associated with this allocator instance.
context: Arc<AllocatorContext<VM>>
hot: bool
Unused.
copy: bool
Is this a copy allocator?
large_bump_pointer: BumpPointer
Bump pointer for large objects, i.e. objects larger than a line (see overflow_alloc).
request_for_large: bool
Is the current request for large or small?
line: Option<Line>
Hole-searching cursor
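To make the two-pointer design concrete, here is a minimal sketch of the aligned bump-allocation fastpath, assuming the cursor/limit layout of BumpPointer. It uses usize in place of mmtk's Address type, and the helper names are illustrative rather than the crate's code.

#[derive(Clone, Copy, Debug)]
struct BumpPointer {
    cursor: usize, // next free byte in the thread-local buffer
    limit: usize,  // one past the last usable byte
}

// Align `region + offset` up to `align`, then undo the offset, so the
// field at `offset` inside the object ends up `align`-aligned.
fn align_allocation(region: usize, align: usize, offset: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (((region + offset) + align - 1) & !(align - 1)) - offset
}

impl BumpPointer {
    // Fastpath: bump `size` bytes, or None when the buffer is exhausted
    // and the caller must take the slowpath to refill it.
    fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Option<usize> {
        let start = align_allocation(self.cursor, align, offset);
        let new_cursor = start + size;
        if new_cursor > self.limit {
            None
        } else {
            self.cursor = new_cursor;
            Some(start)
        }
    }
}

fn main() {
    let mut bp = BumpPointer { cursor: 0x1000, limit: 0x1100 };
    assert_eq!(bp.alloc(24, 8, 0), Some(0x1000));
    assert_eq!(bp.cursor, 0x1018);
    assert!(bp.alloc(512, 8, 0).is_none()); // exhausted: slowpath refills
}

The same cursor/limit pair drives both bump_pointer and large_bump_pointer; what differs between them is the refill policy (recyclable lines versus clean blocks), covered by the methods below.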
Implementations
impl<VM: VMBinding> ImmixAllocator<VM>
pub(crate) fn new(tls: VMThread, space: Option<&'static dyn Space<VM>>, context: Arc<AllocatorContext<VM>>, copy: bool) -> Self
pub(crate) fn immix_space(&self) -> &'static ImmixSpace<VM>
fn overflow_alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
Large-object (larger than a line) bump allocation.
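A hedged sketch of how a request might be routed between the two bump pointers. LINE_BYTES = 256 is an assumption about the default line size (mmtk-core's Line::LOG_BYTES is 8); the routing logic is illustrative rather than the crate's implementation.

const LINE_BYTES: usize = 256; // assumption: default Immix line size

enum Route {
    Small,    // fastpath bump_pointer, fills holes line by line
    Overflow, // large_bump_pointer, bump-allocates into clean blocks
}

fn route(size: usize) -> Route {
    // "Large" here means larger than a line, not a large-object-space object.
    if size > LINE_BYTES { Route::Overflow } else { Route::Small }
}

fn main() {
    assert!(matches!(route(64), Route::Small));
    assert!(matches!(route(300), Route::Overflow));
}

Keeping larger-than-a-line objects on a separate bump pointer means a medium object never forces the fastpath pointer to abandon a partially filled hole.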
fn alloc_slow_hot(&mut self, size: usize, align: usize, offset: usize) -> Address
Bump allocate small objects into recyclable lines (i.e. holes).
fn acquire_recyclable_lines(&mut self, size: usize, align: usize, offset: usize) -> bool
Search for recyclable lines.
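As an illustration of hole-searching, the sketch below scans a per-block line mark table for the next run of free lines. The 128-lines-per-block figure assumes 32KB blocks with 256-byte lines; real Immix additionally treats the line after a marked line conservatively, which this sketch omits, and the names are illustrative.

const LINES_PER_BLOCK: usize = 128; // assumption: 32KB block / 256B lines

// Returns the half-open line range [start, end) of the next hole at or
// after `start`, or None if no free line remains in the block.
fn next_hole(marked: &[bool; LINES_PER_BLOCK], start: usize) -> Option<(usize, usize)> {
    // Skip marked (live) lines to find the hole's first free line.
    let first = (start..LINES_PER_BLOCK).find(|&i| !marked[i])?;
    // Extend the hole until the next marked line or the end of the block.
    let end = (first..LINES_PER_BLOCK)
        .find(|&i| marked[i])
        .unwrap_or(LINES_PER_BLOCK);
    Some((first, end))
}

fn main() {
    let mut marked = [false; LINES_PER_BLOCK];
    (0..4).chain(6..10).for_each(|i| marked[i] = true);
    assert_eq!(next_hole(&marked, 0), Some((4, 6)));    // two-line hole
    assert_eq!(next_hole(&marked, 6), Some((10, 128))); // tail of the block
}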
fn acquire_recyclable_block(&mut self) -> bool
Get a recyclable block from ImmixSpace.
fn acquire_clean_block(&mut self, size: usize, align: usize, offset: usize) -> Address
Get a clean block from ImmixSpace.
fn require_new_block(&mut self, size: usize, align: usize, offset: usize) -> bool
Return whether the TLAB has been exhausted and we need to acquire a new block. Assumes that the buffer limits have been restored using ImmixAllocator::restore_limit_for_stress.
Note that this function may implicitly change the limits of the allocator.
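A sketch of the exhaustion check this predicate performs, assuming the limits were restored first. The real method also handles the overflow (large) buffer and, per the note above, may adjust limits as a side effect; this sketch shows only the small-buffer case with hypothetical names.

struct Tlab {
    cursor: usize,
    limit: usize,
}

fn require_new_block(tlab: &Tlab, size: usize, align: usize, offset: usize) -> bool {
    // Same alignment rule as the fastpath sketch earlier.
    let start = {
        let end = tlab.cursor + offset;
        ((end + align - 1) & !(align - 1)) - offset
    };
    // A new block is needed when the aligned request overruns the buffer.
    start + size > tlab.limit
}

fn main() {
    let tlab = Tlab { cursor: 0x10f0, limit: 0x1100 };
    assert!(!require_new_block(&tlab, 8, 8, 0)); // fits in the remaining 16 bytes
    assert!(require_new_block(&tlab, 32, 8, 0)); // exhausted: acquire a block
}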
fn set_limit_for_stress(&mut self)
Set a fake limit for bump allocation in stress tests. The fake limit is the remaining thread-local buffer size, which is always smaller than the bump cursor. This method may be reentrant, so we check the current limit before setting the values.
fn restore_limit_for_stress(&mut self)
Restore the real limit for bump allocation so we can properly do a thread-local allocation. The fake limit is the remaining thread-local buffer size, and we restore the actual limit from that size and the cursor. This method may be reentrant, so we check the current limit before setting the values.
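The two stress methods amount to overloading the limit field. Below is a sketch of the trick: the stored remaining size is always numerically smaller than the cursor (an address), so limit < cursor doubles as the "fake limit is active" flag, which is what makes both methods safely reentrant. It assumes the cursor is a real, nonzero address; names are illustrative.

struct Tlab {
    cursor: usize, // assumed nonzero: a real buffer address
    limit: usize,  // real address, or remaining size while "fake"
}

fn set_limit_for_stress(t: &mut Tlab) {
    if t.limit > t.cursor {
        // Store the remaining size; every fastpath attempt now fails.
        t.limit -= t.cursor;
    } // else: already fake; reentrant call is a no-op
}

fn restore_limit_for_stress(t: &mut Tlab) {
    if t.limit < t.cursor {
        // Recover the real limit from the cursor and the stored size.
        t.limit += t.cursor;
    } // else: already real; reentrant call is a no-op
}

fn main() {
    let mut t = Tlab { cursor: 0x1040, limit: 0x1100 };
    set_limit_for_stress(&mut t);
    assert_eq!(t.limit, 0xc0);    // remaining bytes, not an address
    set_limit_for_stress(&mut t); // reentrant: unchanged
    assert_eq!(t.limit, 0xc0);
    restore_limit_for_stress(&mut t);
    assert_eq!(t.limit, 0x1100);  // real limit recovered
}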
Trait Implementations
impl<VM: VMBinding> Allocator<VM> for ImmixAllocator<VM>
fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address
Acquire a clean block from ImmixSpace for allocation.
fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address
This is called when precise stress testing is used. We try to use the thread-local buffer for the allocation (after restoring its correct limit). If we cannot allocate from the thread-local buffer, we take the actual slowpath. After the allocation, we set the fake limit again so that future allocations fail the fastpath and come here as well.
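A control-flow sketch of that description. The function names are hypothetical stand-ins for the steps above, and the need_poll branch is an assumption about when the forced poll bypasses the buffer retry; only the ordering is the point here.

fn alloc_slow_once_precise_stress_sketch(need_poll: bool) {
    if !need_poll {
        // 1. Make the thread-local buffer usable again.
        restore_limit();
        // 2. Retry the fastpath against the real limit.
        if fastpath_alloc().is_none() {
            // 3. Genuinely exhausted: take the actual slowpath.
            slowpath_alloc();
        }
    } else {
        // Forced poll: go straight to the slowpath.
        slowpath_alloc();
    }
    // 4. Re-arm the fake limit so the next allocation lands here again.
    set_fake_limit();
}

fn restore_limit() {}
fn fastpath_alloc() -> Option<usize> { None }
fn slowpath_alloc() {}
fn set_fake_limit() {}

fn main() {
    alloc_slow_once_precise_stress_sketch(false);
}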
fn get_space(&self) -> &'static dyn Space<VM>
The Space instance associated with this allocator instance.
fn get_context(&self) -> &AllocatorContext<VM>
fn does_thread_local_allocation(&self) -> bool
Return whether this allocator does thread-local allocation.
fn get_thread_local_buffer_granularity(&self) -> usize
The granularity at which this allocator acquires memory for its thread-local buffer. For example, the BumpAllocator acquires memory in 32KB blocks; depending on the actual size of the current object, it always acquires N*32KB (N>=1), so the BumpAllocator returns 32KB for this method. Only allocators that do thread-local allocation need to implement this method.
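A worked example of that contract, assuming a 32KB granularity: any request is rounded up to a whole number of blocks.

const BLOCK_BYTES: usize = 32 * 1024; // assumed granularity

// Bytes actually acquired for a request of `size` bytes: round up to N blocks.
fn bytes_acquired(size: usize) -> usize {
    ((size + BLOCK_BYTES - 1) / BLOCK_BYTES) * BLOCK_BYTES
}

fn main() {
    assert_eq!(bytes_acquired(1), BLOCK_BYTES);             // N = 1
    assert_eq!(bytes_acquired(40 * 1024), 2 * BLOCK_BYTES); // N = 2
}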
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
alloc_slow
. Read moresource§fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt, which calls alloc_slow_once. This function also accounts for increasing the allocation bytes in order to support stress testing. When precise stress testing is being used, the alloc_slow_once_precise_stress function is used instead.
fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address
A wrapper around alloc_slow_once that inserts USDT tracepoints.
fn on_mutator_destroy(&mut self)
Called when the crate::plan::Mutator that includes this allocator is about to be destroyed. Some allocators may need to save or transfer their thread-local data to the space.
Auto Trait Implementations
impl<VM> !RefUnwindSafe for ImmixAllocator<VM>
impl<VM> Send for ImmixAllocator<VM>
impl<VM> Sync for ImmixAllocator<VM>
impl<VM> Unpin for ImmixAllocator<VM>
impl<VM> !UnwindSafe for ImmixAllocator<VM>