pub trait Allocator<VM: VMBinding>: Downcast {
    // Required methods
    fn get_tls(&self) -> VMThread;
    fn get_space(&self) -> &'static dyn Space<VM>;
    fn get_context(&self) -> &AllocatorContext<VM>;
    fn does_thread_local_allocation(&self) -> bool;
    fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address;
    fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address;

    // Provided methods
    fn get_thread_local_buffer_granularity(&self) -> usize { ... }
    fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address { ... }
    fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address { ... }
    fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address { ... }
    fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address { ... }
    fn on_mutator_destroy(&mut self) { ... }
}
A trait which implements allocation routines. Every allocator needs to implement this trait.
Required Methods
fn get_space(&self) -> &'static dyn Space<VM>
Return the Space instance associated with this allocator instance.
fn get_context(&self) -> &AllocatorContext<VM>
Return the context for the allocator.
fn does_thread_local_allocation(&self) -> bool
Return whether this allocator can do thread-local allocation. If an allocator does not do thread-local allocation, each allocation goes to the slowpath and is checked for GC polls.
fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address
An allocation attempt. The implementation of this function depends on the allocator used. If an allocator supports thread-local allocations, the allocation will be serviced from its TLAB; otherwise it defaults to the slowpath, i.e. alloc_slow.
Note that in the case where the VM is out of memory, we invoke Collection::out_of_memory to inform the binding and then return a null pointer back to it. We have no assumptions on whether the VM will continue executing or abort immediately.
An allocator needs to make sure the object reference for the returned address is in the same chunk as the returned address (so the side metadata and the SFT for an object reference are valid). See crate::util::alloc::object_ref_guard.
Arguments:
- size: the allocation size in bytes.
- align: the required alignment in bytes.
- offset: the required offset in bytes.
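As a rough illustration of the fastpath/slowpath split described above, here is a minimal sketch of a TLAB bump-pointer allocation. This is not MMTk's actual implementation: addresses are modeled as plain usize (0 stands in for a null Address), the offset parameter is omitted, and the types are invented for the example.

```rust
// Hypothetical, simplified TLAB state: a cursor/limit pair into a
// thread-local buffer. Not MMTk's real BumpAllocator.
struct BumpAlloc {
    cursor: usize,
    limit: usize,
}

// Round `addr` up to the next multiple of `align` (a power of two).
fn align_up(addr: usize, align: usize) -> usize {
    (addr + align - 1) & !(align - 1)
}

impl BumpAlloc {
    /// Fastpath: bump the cursor if the buffer has room; return 0
    /// (modeling a null Address) to signal that the real allocator
    /// would fall back to alloc_slow.
    fn alloc(&mut self, size: usize, align: usize) -> usize {
        let start = align_up(self.cursor, align);
        let end = start + size;
        if end > self.limit {
            return 0; // would call alloc_slow here
        }
        self.cursor = end;
        start
    }
}
```

The point of the sketch is only the shape of the fastpath check: a couple of arithmetic operations and one comparison, with everything else pushed to the slowpath.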
fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address
Single slowpath allocation attempt. This is called by alloc_slow_inline. The implementation of this function depends on the allocator used. Generally, if an allocator supports thread-local allocations, it will try to allocate more TLAB space here. If it doesn't, then (generally) the allocator simply allocates enough space for the current object.
Arguments:
- size: the allocation size in bytes.
- align: the required alignment in bytes.
- offset: the required offset in bytes.
Provided Methods
fn get_thread_local_buffer_granularity(&self) -> usize
Return the granularity at which the allocator acquires memory from the global space to use as its thread-local buffer. For example, the BumpAllocator acquires memory in 32KB blocks. Depending on the actual size of the current object, it always acquires memory of N*32KB (N>=1). Thus the BumpAllocator returns 32KB for this method. Only allocators that do thread-local allocation need to implement this method.
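The N*32KB acquisition described above is just a round-up to the buffer granularity. A hedged sketch (GRANULARITY and refill_size are invented names for the example, not MMTk API):

```rust
// Assumed granularity matching the BumpAllocator example: 32KB.
const GRANULARITY: usize = 32 * 1024;

/// How much memory to acquire from the global space so that the
/// current object fits: N * GRANULARITY with N >= 1.
fn refill_size(object_size: usize) -> usize {
    // Ceiling division, then clamp N to at least 1 (so even a
    // zero-sized request still acquires one full buffer).
    let n = (object_size + GRANULARITY - 1) / GRANULARITY;
    n.max(1) * GRANULARITY
}
```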
fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function is explicitly not inlined for performance considerations.
Arguments:
- size: the allocation size in bytes.
- align: the required alignment in bytes.
- offset: the required offset in bytes.
fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address
Slowpath allocation attempt. This function executes the actual slowpath allocation. A slowpath allocation in MMTk attempts to allocate the object using the per-allocator definition of alloc_slow_once. This function also accounts for increasing the allocation bytes in order to support stress testing. In case precise stress testing is being used, the alloc_slow_once_precise_stress function is used instead.
Note that in the case where the VM is out of memory, we invoke Collection::out_of_memory with an AllocationError::HeapOutOfMemory error to inform the binding and then return a null pointer back to it. We have no assumptions on whether the VM will continue executing or abort immediately on an AllocationError::HeapOutOfMemory error.
Arguments:
- size: the allocation size in bytes.
- align: the required alignment in bytes.
- offset: the required offset in bytes.
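The stress-test accounting mentioned above amounts to counting allocated bytes against a stress factor and requesting a GC each time the count rolls over. A hedged sketch of that bookkeeping (StressCounter and its fields are invented for the example):

```rust
// Hypothetical byte accounting for stress-test GC: not MMTk's
// actual data structure.
struct StressCounter {
    allocated: usize,     // bytes allocated since the last stress GC
    stress_factor: usize, // trigger a stress GC every this many bytes
}

impl StressCounter {
    /// Account for `size` allocated bytes; returns true when this
    /// allocation should trigger a stress GC.
    fn account(&mut self, size: usize) -> bool {
        self.allocated += size;
        if self.allocated >= self.stress_factor {
            // Carry the overshoot into the next window so the
            // long-run rate stays one GC per stress_factor bytes.
            self.allocated -= self.stress_factor;
            true
        } else {
            false
        }
    }
}
```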
fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address
A wrapper method for alloc_slow_once to insert USDT tracepoints.
Arguments:
- size: the allocation size in bytes.
- align: the required alignment in bytes.
- offset: the required offset in bytes.
fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address
Single slowpath allocation attempt for stress testing. When the stress factor is set (e.g. to N), we would expect a stress GC to be triggered for every N bytes allocated. However, allocators that do thread-local allocation may allocate from their thread-local buffer, which does not have a GC poll check, and they may even allocate with the JIT-generated allocation fastpath, which is unaware of stress-test GC. In both cases, we are not able to guarantee that a stress GC is triggered every N bytes. To solve this, when the stress factor is set, we call this method instead of the normal alloc_slow_once(). We expect the implementation of this slow allocation to trick the fastpath so that every allocation fails in the fastpath, jumps to the slowpath, and eventually calls this method again for the actual allocation.
The actual implementation of how to trick the fastpath may vary. For example, our bump pointer allocator sets the thread-local buffer limit to the buffer size instead of the buffer end address. In this case, every fastpath check (cursor + size < limit) fails and jumps to this slowpath. In the slowpath, we still allocate from the thread-local buffer, and recompute the limit (remaining buffer size).
If an allocator does not do thread-local allocation (i.e. it returns false for does_thread_local_allocation()), it does not need to override this method. The default implementation simply calls alloc_slow_once(), which works fine for allocators that do not have thread-local allocation.
Arguments:
- size: the allocation size in bytes.
- align: the required alignment in bytes.
- offset: the required offset in bytes.
- need_poll: if this is true, the implementation must poll for a GC, rather than attempting to allocate from the local buffer.
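The bump-pointer "trick the fastpath" idea described above can be sketched as follows. This is a simplified model, not MMTk's code: addresses are plain usize, and StressBump, its fields, and buffer_end are invented for the example.

```rust
// Hypothetical bump allocator under precise stress testing. The
// `limit` field holds the *remaining buffer size* rather than the
// buffer end address, so the fastpath comparison always fails.
struct StressBump {
    cursor: usize, // next allocation address (always a large address)
    limit: usize,  // trap: remaining size, deliberately not an address
}

impl StressBump {
    /// Fastpath check as the JIT/fast code would perform it. Because
    /// `limit` is a small size, `cursor + size <= limit` never holds.
    fn fastpath(&mut self, size: usize) -> Option<usize> {
        if self.cursor + size <= self.limit {
            let addr = self.cursor;
            self.cursor += size;
            Some(addr)
        } else {
            None // jump to the slowpath
        }
    }

    /// Precise-stress slowpath: still allocate from the real buffer
    /// (whose true end is `buffer_end`), then re-arm the trap by
    /// recomputing `limit` as the remaining size.
    fn slowpath(&mut self, size: usize, buffer_end: usize) -> Option<usize> {
        if self.cursor + size > buffer_end {
            return None; // would need a real refill / GC poll here
        }
        let addr = self.cursor;
        self.cursor += size;
        self.limit = buffer_end - self.cursor; // remaining size
        Some(addr)
    }
}
```

Every allocation thus takes the slowpath, where the GC poll (and stress GC trigger) can run, while memory is still served from the thread-local buffer.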
fn on_mutator_destroy(&mut self)
The crate::plan::Mutator that includes this allocator is going to be destroyed. Some allocators may need to save or transfer their thread-local data to the space.
Implementations
impl<VM> dyn Allocator<VM>
pub fn is<__T: Allocator<VM>>(&self) -> bool
Returns true if the trait object wraps an object of type __T.
pub fn downcast<__T: Allocator<VM>>(self: Box<Self>) -> Result<Box<__T>, Box<Self>>
Returns a boxed object from a boxed trait object if the underlying object is of type __T. Returns the original boxed trait if it isn't.
pub fn downcast_rc<__T: Allocator<VM>>(self: Rc<Self>) -> Result<Rc<__T>, Rc<Self>>
Returns an Rc-ed object from an Rc-ed trait object if the underlying object is of type __T. Returns the original Rc-ed trait if it isn't.
pub fn downcast_ref<__T: Allocator<VM>>(&self) -> Option<&__T>
Returns a reference to the object within the trait object if it is of type __T, or None if it isn't.
pub fn downcast_mut<__T: Allocator<VM>>(&mut self) -> Option<&mut __T>
Returns a mutable reference to the object within the trait object if it is of type __T, or None if it isn't.