Struct mmtk::util::alloc::large_object_allocator::LargeObjectAllocator
#[repr(C)]
pub struct LargeObjectAllocator<VM: VMBinding> {
    pub tls: VMThread,
    space: &'static LargeObjectSpace<VM>,
    context: Arc<AllocatorContext<VM>>,
}
An allocator that only allocates at page granularity. This is intended for large objects.
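As a rough illustration of what page-granularity allocation implies, here is a hedged sketch; the 4 KiB page size and the pages_required helper are assumptions made for the example, not MMTk constants.

// Illustrative only: a page-granularity allocator hands out whole pages, so
// every request is rounded up to a multiple of the page size.
const PAGE_SIZE: usize = 4096; // assumed page size for the example

fn pages_required(size: usize) -> usize {
    (size + PAGE_SIZE - 1) / PAGE_SIZE
}

fn main() {
    let object = 100 * 1024 + 1; // a large object of 100 KiB plus one byte
    let pages = pages_required(object);
    let reserved = pages * PAGE_SIZE;
    println!("{object} bytes -> {pages} pages ({reserved} bytes reserved)");
    assert_eq!(pages, 26); // rounded up to the next whole page
}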
Fields

tls: VMThread
The VMThread associated with this allocator instance.

space: &'static LargeObjectSpace<VM>
The Space instance associated with this allocator instance.

context: Arc<AllocatorContext<VM>>
Implementations

impl<VM: VMBinding> LargeObjectAllocator<VM>

pub(crate) fn new(tls: VMThread, space: &'static LargeObjectSpace<VM>, context: Arc<AllocatorContext<VM>>) -> Self
Trait Implementations

impl<VM: VMBinding> Allocator<VM> for LargeObjectAllocator<VM>

fn get_context(&self) -> &AllocatorContext<VM>
Return the context for the allocator.
fn get_space(&self) -> &'static dyn Space<VM>
Return the Space instance associated with this allocator instance.

fn does_thread_local_allocation(&self) -> bool
Return whether this allocator can do thread-local allocation. If an allocator does not do
thread-local allocation, each allocation goes to the slowpath and includes a check for GC
polls.
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
An allocation attempt. The implementation of this function depends on the allocator used.
If an allocator supports thread-local allocation, the allocation will be serviced from its
TLAB; otherwise it will default to using the slowpath, i.e. alloc_slow.
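A hedged sketch of that contract follows; the SketchAllocator type and its fields are invented for the example and are not MMTk code. An allocator with a thread-local buffer bump-allocates from it and only falls into the slowpath when the buffer is exhausted; an allocator without one, such as this large object allocator, takes the slowpath on every request, which is where GC polling happens.

// Hedged sketch, not MMTk code: the general alloc contract described above.
struct SketchAllocator {
    cursor: usize, // next free byte in the thread-local buffer
    limit: usize,  // end of the thread-local buffer (== cursor means "no TLAB")
}

impl SketchAllocator {
    fn alloc(&mut self, size: usize) -> usize {
        if self.cursor + size <= self.limit {
            // Fastpath: service the request from the thread-local buffer.
            let result = self.cursor;
            self.cursor += size;
            result
        } else {
            // Slowpath: poll for GC, then acquire memory from the space.
            self.alloc_slow(size)
        }
    }

    fn alloc_slow(&mut self, size: usize) -> usize {
        println!("slowpath: GC poll, then acquire {size} bytes from the space");
        0 // placeholder address for the sketch
    }
}

fn main() {
    // No thread-local buffer at all: every allocation takes the slowpath.
    let mut los_like = SketchAllocator { cursor: 0, limit: 0 };
    let _addr = los_like.alloc(1 << 20); // a 1 MiB "large object"
}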
fn alloc_slow_once(&mut self, size: usize, align: usize, _offset: usize) -> Address
Single slowpath allocation attempt. This is called by alloc_slow_inline. The implementation of
this function depends on the allocator used. Generally, if an allocator supports thread-local
allocation, it will try to allocate more TLAB space here. If it doesn't, then (generally) the
allocator simply allocates enough space for the current object.
fn get_thread_local_buffer_granularity(&self) -> usize
Return the granularity at which the allocator acquires memory from the global space for use as
its thread-local buffer. For example, the BumpAllocator acquires memory in 32 KB blocks:
depending on the size of the current object, it always acquires N*32 KB (N >= 1), so the
BumpAllocator returns 32 KB for this method. Only allocators that do thread-local allocation
need to implement this method.
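A hedged sketch of that N*32 KB acquisition rule; the BLOCK_SIZE constant and blocks_to_acquire helper are illustrative, not the actual BumpAllocator code.

// Illustrative only: how many thread-local buffer blocks a request needs when
// the allocator acquires memory from the global space in fixed-size blocks.
const BLOCK_SIZE: usize = 32 * 1024; // assumed 32 KB granularity

fn blocks_to_acquire(object_size: usize) -> usize {
    // Always at least one block; larger requests span several whole blocks.
    std::cmp::max(1, (object_size + BLOCK_SIZE - 1) / BLOCK_SIZE)
}

fn main() {
    assert_eq!(blocks_to_acquire(100), 1);       // small object: one 32 KB block
    assert_eq!(blocks_to_acquire(32 * 1024), 1); // exactly one block
    assert_eq!(blocks_to_acquire(90 * 1024), 3); // N*32 KB with N = 3
}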
fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function is explicitly not inlined for performance
considerations.
fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function executes the actual slowpath allocation. A slowpath
allocation in MMTk attempts to allocate the object using the per-allocator definition of
alloc_slow_once. This function also accounts for increasing the allocation bytes in order to
support stress testing. In case precise stress testing is being used, the
alloc_slow_once_precise_stress function is used instead.
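A hedged sketch of that bookkeeping; the StressState struct, its field names, the stress-factor check, and the free-function dispatch are assumptions made for the example, not the actual alloc_slow_inline implementation.

// Illustrative only: the shape of a slowpath that tracks allocated bytes for
// stress testing and chooses between the normal and the precise-stress
// single-attempt slowpath.
struct StressState {
    allocation_bytes: usize, // bytes allocated since the last stress GC
    stress_factor: usize,    // trigger a stress GC roughly every N bytes
    precise_stress: bool,    // whether precise stress testing is enabled
}

fn alloc_slow_inline(state: &mut StressState, size: usize) -> usize {
    // Account for this allocation so a stress GC can be triggered every N bytes.
    state.allocation_bytes += size;
    let need_poll = state.allocation_bytes >= state.stress_factor;

    if state.precise_stress {
        alloc_slow_once_precise_stress(size, need_poll)
    } else {
        alloc_slow_once(size)
    }
}

fn alloc_slow_once(size: usize) -> usize {
    println!("normal single slowpath attempt for {size} bytes");
    0
}

fn alloc_slow_once_precise_stress(size: usize, need_poll: bool) -> usize {
    println!("precise-stress slowpath attempt for {size} bytes (poll: {need_poll})");
    0
}

fn main() {
    let mut state = StressState { allocation_bytes: 0, stress_factor: 1 << 20, precise_stress: true };
    let _ = alloc_slow_inline(&mut state, 64 * 1024);
}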
fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address
A wrapper method for alloc_slow_once to insert USDT tracepoints.
fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address
Single slowpath allocation attempt for stress testing. When the stress factor is set (e.g. to
N), we expect a stress GC to be triggered for every N bytes allocated. However, allocators
that do thread-local allocation may allocate from their thread-local buffer, which has no GC
poll check, and they may even allocate through the JIT-generated allocation fastpath, which is
unaware of the stress-test GC. In both cases we cannot guarantee that a stress GC is triggered
every N bytes. To solve this, when the stress factor is set, this method is called instead of
the normal alloc_slow_once(). The implementation of this slow allocation is expected to trick
the fastpath so that every allocation fails in the fastpath, jumps to the slowpath, and
eventually calls this method again for the actual allocation.
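One hedged way to picture the "trick the fastpath" idea, illustrative only: the field names and the exhaust/restore steps below are assumptions, not the actual MMTk implementation. The slowpath leaves the thread-local buffer looking exhausted, so the next fastpath check fails and control returns to the slowpath, where the stress-GC poll lives.

// Illustrative only: a bump-pointer fastpath that is deliberately kept "full"
// so every allocation falls through to the slowpath, where stress-GC polling
// can happen on every allocation.
struct StressedBumpAllocator {
    cursor: usize,
    limit: usize,      // what the fastpath compares against
    real_limit: usize, // the buffer's true end, remembered by the slowpath
}

impl StressedBumpAllocator {
    fn alloc_fastpath(&mut self, size: usize) -> Option<usize> {
        // Because limit == cursor, this check always fails under stress testing.
        if self.cursor + size <= self.limit {
            let addr = self.cursor;
            self.cursor += size;
            Some(addr)
        } else {
            None
        }
    }

    fn alloc_slow_once_precise_stress(&mut self, size: usize, need_poll: bool) -> usize {
        if need_poll {
            println!("stress GC poll");
        }
        // Use the real buffer bounds for this one allocation ...
        let addr = self.cursor;
        self.cursor += size.min(self.real_limit - self.cursor);
        // ... then re-exhaust the visible limit so the next fastpath fails again.
        self.limit = self.cursor;
        addr
    }

    fn alloc(&mut self, size: usize) -> usize {
        self.alloc_fastpath(size)
            .unwrap_or_else(|| self.alloc_slow_once_precise_stress(size, true))
    }
}

fn main() {
    let mut a = StressedBumpAllocator { cursor: 0x1000, limit: 0x1000, real_limit: 0x9000 };
    let first = a.alloc(256);
    let second = a.alloc(256);
    assert_eq!(second, first + 256); // both allocations went through the slowpath
}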
fn on_mutator_destroy(&mut self)
The crate::plan::Mutator that includes this allocator is going to be destroyed. Some
allocators may need to save or transfer their thread-local data to the space.

Auto Trait Implementations
impl<VM> !RefUnwindSafe for LargeObjectAllocator<VM>
impl<VM> Send for LargeObjectAllocator<VM>
impl<VM> Sync for LargeObjectAllocator<VM>
impl<VM> Unpin for LargeObjectAllocator<VM>
impl<VM> !UnwindSafe for LargeObjectAllocator<VM>
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Downcast for T
where
    T: Any,

fn into_any(self: Box<T>) -> Box<dyn Any>
Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be
further downcast into Box<ConcreteType> where ConcreteType implements Trait.

fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast
into Rc<ConcreteType> where ConcreteType implements Trait.

fn as_any(&self) -> &(dyn Any + 'static)
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate
&Any's vtable from &Trait's.

fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot
generate &mut Any's vtable from &mut Trait's.

impl<T> DowncastSync for T
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self
into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise.