Struct mmtk::util::alloc::MarkCompactAllocator
#[repr(C)]
pub struct MarkCompactAllocator<VM: VMBinding> {
pub(super) bump_allocator: BumpAllocator<VM>,
}
A thin wrapper (specific implementation) of the bump allocator that reserves extra bytes when allocating.
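As a rough sketch of that behaviour (not this crate's exact source), the wrapper can be pictured as padding every request by HEADER_RESERVED_IN_BYTES before delegating to the inner BumpAllocator, then handing back the address just past the reserved header:

// Sketch only: mirrors the documented behaviour of the thin wrapper.
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address {
    // Ask the underlying bump allocator for the object plus the reserved header.
    let cell = self
        .bump_allocator
        .alloc(size + Self::HEADER_RESERVED_IN_BYTES, align, offset);
    if cell.is_zero() {
        cell // allocation failed; propagate the zero address unchanged
    } else {
        // Skip the reserved header so the caller receives the object start.
        cell + Self::HEADER_RESERVED_IN_BYTES
    }
}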
Fields
bump_allocator: BumpAllocator<VM>
Implementations
impl<VM: VMBinding> MarkCompactAllocator<VM>
pub const HEADER_RESERVED_IN_BYTES: usize = crate::policy::markcompactspace::MarkCompactSpace<VM>::HEADER_RESERVED_IN_BYTES
The number of bytes that the allocator reserves for its own header.
pub(crate) fn new(tls: VMThread, space: &'static dyn Space<VM>, context: Arc<AllocatorContext<VM>>) -> Self
Trait Implementations
impl<VM: VMBinding> Allocator<VM> for MarkCompactAllocator<VM>
fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address
Slow path for allocation if precise stress testing has been enabled. It works by manipulating the limit to be always below the cursor. There are three cases (sketched after the list):
- acquires a new block if the hard limit has been met;
- allocates an object using the bump pointer semantics from the fastpath if there is sufficient space; and
- does not allocate an object but forces a poll for GC if the stress factor has been crossed.
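A hedged sketch of how those three cases could be laid out; helper names like acquire_block and bump_alloc_within_real_limit are illustrative, not part of this crate's API:

// Illustrative only: the helper functions below are hypothetical.
fn alloc_slow_once_precise_stress(
    &mut self,
    size: usize,
    align: usize,
    offset: usize,
    need_poll: bool,
) -> Address {
    if need_poll {
        // Case 3: the stress factor has been crossed; force a GC poll
        // instead of handing out memory from the current block.
        return self.acquire_block(size, align, offset, /* force_poll = */ true);
    }
    // The externally visible limit is kept below the cursor, so the fastpath
    // always falls through to this slowpath; try to bump within the real limit.
    let result = self.bump_alloc_within_real_limit(size, align, offset);
    if result.is_zero() {
        // Case 1: the hard limit has been met; acquire a new block.
        self.acquire_block(size, align, offset, /* force_poll = */ false)
    } else {
        // Case 2: sufficient space; allocated with plain bump-pointer semantics.
        result
    }
}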
fn get_space(&self) -> &'static dyn Space<VM>
Return the Space instance associated with this allocator instance.
fn get_context(&self) -> &AllocatorContext<VM>
Return the context for the allocator.
fn does_thread_local_allocation(&self) -> bool
Return whether this allocator can do thread local allocation. If an allocator does not do thread local allocation, each allocation will go to the slowpath and will include a check for GC polls.
fn get_thread_local_buffer_granularity(&self) -> usize
Return the granularity at which the allocator acquires memory from the global space to use as its thread local buffer. For example, the BumpAllocator acquires memory in 32KB blocks. Depending on the actual size of the current object, it always acquires N*32KB (N>=1); thus the BumpAllocator returns 32KB for this method. Only allocators that do thread local allocation need to implement this method.
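As a concrete sketch of that rounding, a 40KB request against a 32KB granularity causes two blocks (64KB) to be acquired; the helper below is hypothetical:

// Hypothetical helper: round a request up to N * granularity (N >= 1).
fn blocks_to_acquire(request_bytes: usize, granularity: usize) -> usize {
    let blocks = (request_bytes + granularity - 1) / granularity;
    std::cmp::max(1, blocks) // e.g. 40KB with 32KB granularity -> 2 blocks (64KB)
}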
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
An allocation attempt. The implementation of this function depends on the allocator used. If an allocator supports thread local allocations, the allocation will be serviced from its TLAB; otherwise it will default to using the slowpath, i.e. alloc_slow.
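The usual shape of that fastpath/slowpath split for a thread local bump-pointer allocator, sketched under the assumption of cursor and limit fields (offset handling omitted; not this crate's exact code):

// Sketch: serve from the TLAB if the object fits, otherwise take the slowpath.
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address {
    let start = self.cursor.align_up(align); // offset handling omitted for brevity
    let new_cursor = start + size;
    if new_cursor > self.limit {
        // The thread local buffer is exhausted: the non-inlined slowpath may
        // acquire a new buffer from the space or poll for GC.
        self.alloc_slow(size, align, offset)
    } else {
        self.cursor = new_cursor;
        start
    }
}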
fn alloc_slow_once( &mut self, size: usize, align: usize, offset: usize ) -> Address
Single slow path allocation attempt. This is called by alloc_slow_inline. The implementation of this function depends on the allocator used. Generally, if an allocator supports thread local allocations, it will try to allocate more TLAB space here. If it doesn't, then (generally) the allocator simply allocates enough space for the current object.
fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function is explicitly not inlined for performance considerations.
fn alloc_slow_inline( &mut self, size: usize, align: usize, offset: usize ) -> Address
Slowpath allocation attempt. This function executes the actual slowpath allocation. A slowpath allocation in MMTk attempts to allocate the object using the per-allocator definition of alloc_slow_once. This function also accounts for increasing the allocation bytes in order to support stress testing. In case precise stress testing is being used, the alloc_slow_once_precise_stress function is used instead.
fn alloc_slow_once_traced( &mut self, size: usize, align: usize, offset: usize ) -> Address
A wrapper method for alloc_slow_once to insert USDT tracepoints.
fn on_mutator_destroy(&mut self)
The crate::plan::Mutator that includes this allocator is going to be destroyed. Some allocators may need to save/transfer their thread local data to the space.
Auto Trait Implementations
impl<VM> !RefUnwindSafe for MarkCompactAllocator<VM>
impl<VM> Send for MarkCompactAllocator<VM>
impl<VM> Sync for MarkCompactAllocator<VM>
impl<VM> Unpin for MarkCompactAllocator<VM>
impl<VM> !UnwindSafe for MarkCompactAllocator<VM>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.
impl<T> DowncastSync for T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.