#[repr(C)]
pub struct ImmixAllocator<VM: VMBinding> {
    pub tls: VMThread,
    pub bump_pointer: BumpPointer,
    space: &'static ImmixSpace<VM>,
    context: Arc<AllocatorContext<VM>>,
    hot: bool,
    copy: bool,
    pub(super) large_bump_pointer: BumpPointer,
    request_for_large: bool,
    line: Option<Line>,
}

Immix allocator

Fields

tls: VMThread

VMThread associated with this allocator instance

bump_pointer: BumpPointer

The fastpath bump pointer.

space: &'static ImmixSpace<VM>

Space instance associated with this allocator instance.

context: Arc<AllocatorContext<VM>>

hot: bool

unused

copy: bool

Is this a copy allocator?

large_bump_pointer: BumpPointer

Bump pointer for large objects

request_for_large: bool

Is the current request for large or small?

line: Option<Line>

Hole-searching cursor
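
To make the fastpath concrete, the following is a minimal, self-contained sketch of how a bump pointer such as bump_pointer serves allocations before falling back to the slowpath. Address and BumpPointer here are simplified stand-ins for illustration, not MMTk's actual definitions, and alignment handling is omitted.

// Simplified stand-ins for the MMTk types; for illustration only.
type Address = usize;

struct BumpPointer {
    cursor: Address, // next free byte in the current thread-local buffer
    limit: Address,  // end of the current thread-local buffer
}

impl BumpPointer {
    // Try to serve `size` bytes (assumed already aligned) from the buffer.
    // Returns None when the buffer is exhausted and the slowpath must run.
    fn try_alloc(&mut self, size: usize) -> Option<Address> {
        let new_cursor = self.cursor.checked_add(size)?;
        if new_cursor > self.limit {
            None // fastpath miss: the caller falls back to alloc_slow
        } else {
            let result = self.cursor;
            self.cursor = new_cursor;
            Some(result)
        }
    }
}

fn main() {
    // Pretend we were handed a 256-byte thread-local buffer at address 0x1000.
    let mut bp = BumpPointer { cursor: 0x1000, limit: 0x1100 };
    assert_eq!(bp.try_alloc(64), Some(0x1000)); // fastpath hit
    assert_eq!(bp.try_alloc(512), None);        // does not fit: slowpath
}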

Implementations

impl<VM: VMBinding> ImmixAllocator<VM>

pub(crate) fn reset(&mut self)

impl<VM: VMBinding> ImmixAllocator<VM>

pub(crate) fn new(tls: VMThread, space: Option<&'static dyn Space<VM>>, context: Arc<AllocatorContext<VM>>, copy: bool) -> Self

pub(crate) fn immix_space(&self) -> &'static ImmixSpace<VM>

fn overflow_alloc(&mut self, size: usize, align: usize, offset: usize) -> Address

Large-object (larger than a line) bump allocation.
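
As a rough, standalone illustration of why there are two bump pointers, the sketch below routes any request larger than a line to a separate overflow buffer, mirroring the split between bump_pointer and large_bump_pointer. The line size, names, and types are assumptions for illustration, not MMTk's implementation.

// Hypothetical line size; the real value comes from the Immix space layout.
const LINE_BYTES: usize = 256;

struct Sketch {
    small: std::ops::Range<usize>, // stands in for `bump_pointer`
    large: std::ops::Range<usize>, // stands in for `large_bump_pointer`
}

impl Sketch {
    fn alloc(&mut self, size: usize) -> Option<usize> {
        // Objects spanning more than one line go to the overflow buffer so
        // they do not break up recyclable lines.
        let buf = if size > LINE_BYTES { &mut self.large } else { &mut self.small };
        if buf.start + size <= buf.end {
            let addr = buf.start;
            buf.start += size;
            Some(addr)
        } else {
            None // would trigger the real allocator's slowpath
        }
    }
}

fn main() {
    let mut a = Sketch { small: 0x1000..0x1100, large: 0x8000..0x9000 };
    assert_eq!(a.alloc(64), Some(0x1000));  // small object: normal buffer
    assert_eq!(a.alloc(512), Some(0x8000)); // larger than a line: overflow
}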

fn alloc_slow_hot(&mut self, size: usize, align: usize, offset: usize) -> Address

Bump allocate small objects into recyclable lines (i.e. holes).
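
The hole search can be pictured as scanning a per-block line mark table for a run of consecutive free lines, resuming from the cursor kept in the line field. The sketch below is a standalone approximation; the mark-table layout, line size, and helper name are assumptions rather than MMTk's implementation.

// Hypothetical line size, as in the earlier sketch.
const LINE_BYTES: usize = 256;

// Find the first hole of at least `lines_needed` consecutive free lines,
// starting the search at line index `cursor`. Returns the hole as a
// half-open range of line indices.
fn find_hole(line_marked: &[bool], cursor: usize, lines_needed: usize) -> Option<(usize, usize)> {
    let mut start = cursor;
    while start < line_marked.len() {
        // Skip marked (live) lines.
        while start < line_marked.len() && line_marked[start] {
            start += 1;
        }
        // Measure the run of free lines.
        let mut end = start;
        while end < line_marked.len() && !line_marked[end] {
            end += 1;
        }
        if end - start >= lines_needed {
            return Some((start, end));
        }
        start = end;
    }
    None
}

fn main() {
    // Line indices:   0     1      2     3      4
    let marks = [true, false, true, false, false];
    // A 300-byte object needs ceil(300 / 256) = 2 lines; the first hole that
    // fits is lines 3..5.
    let lines_needed = (300 + LINE_BYTES - 1) / LINE_BYTES;
    assert_eq!(find_hole(&marks, 0, lines_needed), Some((3, 5)));
}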

fn acquire_recyclable_lines(&mut self, size: usize, align: usize, offset: usize) -> bool

Search for recyclable lines.

fn acquire_recyclable_block(&mut self) -> bool

Get a recyclable block from ImmixSpace.

fn acquire_clean_block(&mut self, size: usize, align: usize, offset: usize) -> Address

fn require_new_block(&mut self, size: usize, align: usize, offset: usize) -> bool

Return whether the TLAB has been exhausted and we need to acquire a new block. Assumes that the buffer limits have been restored using ImmixAllocator::restore_limit_for_stress. Note that this function may implicitly change the limits of the allocator.
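
A simplified version of this check might look like the sketch below. It assumes the real (non-stress) limit has already been restored, and it ignores alignment and the overflow buffer, both of which the actual method has to deal with.

// Simplified stand-in for the real bump pointer.
struct BumpPointer { cursor: usize, limit: usize }

// True when the request cannot be served from the current buffer, i.e. the
// thread-local block is exhausted and a new one must be acquired.
fn require_new_block(bp: &BumpPointer, size: usize) -> bool {
    bp.cursor.checked_add(size).map_or(true, |end| end > bp.limit)
}

fn main() {
    let bp = BumpPointer { cursor: 0x1000, limit: 0x1100 };
    assert!(!require_new_block(&bp, 128));  // still fits in the current block
    assert!(require_new_block(&bp, 1024));  // exhausted: acquire a new block
}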

fn set_limit_for_stress(&mut self)

Set fake limits for the bump allocation for stress tests. The fake limit is the remaining thread-local buffer size, which should always be smaller than the bump cursor. This method may be called reentrantly, so we check the current values before setting them.

fn restore_limit_for_stress(&mut self)

Restore the real limits for the bump allocation so we can properly do a thread-local allocation. The fake limit is the remaining thread-local buffer size, and we restore the actual limit from that size and the cursor. This method may be called reentrantly, so we check the current values before setting them.
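
One way to picture the fake-limit trick used by these two methods: the remaining buffer size is stored in the limit slot, which is necessarily smaller than the cursor, so every fastpath bounds check fails and allocation is forced through the slowpath, where stress-test GC polling happens. The sketch below is illustrative only; the guards mirror the "check before setting the values" note, since either method may be called when the limit is already in the other form.

// Simplified stand-in; only the cursor/limit arithmetic is shown.
struct BumpPointer { cursor: usize, limit: usize }

fn set_limit_for_stress(bp: &mut BumpPointer) {
    if bp.limit > bp.cursor {
        // Encode the remaining size into the limit slot (always < cursor),
        // so the fastpath check `cursor + size <= limit` can never succeed.
        bp.limit -= bp.cursor;
    }
}

fn restore_limit_for_stress(bp: &mut BumpPointer) {
    if bp.limit < bp.cursor {
        // Decode: real limit = cursor + remaining size.
        bp.limit += bp.cursor;
    }
}

fn main() {
    let mut bp = BumpPointer { cursor: 0x1040, limit: 0x1100 };
    set_limit_for_stress(&mut bp);
    assert_eq!(bp.limit, 0xC0);    // remaining bytes, now below the cursor
    restore_limit_for_stress(&mut bp);
    assert_eq!(bp.limit, 0x1100);  // real limit recovered
}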

Trait Implementations

impl<VM: VMBinding> Allocator<VM> for ImmixAllocator<VM>

fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address

Acquire a clean block from ImmixSpace for allocation.

fn alloc_slow_once_precise_stress(&mut self, size: usize, align: usize, offset: usize, need_poll: bool) -> Address

This is called when precise stress testing is used. We first try to use the thread-local buffer for the allocation (after restoring its correct limit). If we cannot allocate from the thread-local buffer, we go to the actual slowpath. After the allocation, we set the fake limit again so future allocations fail the fastpath check and end up here as well.
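
The flow described above can be sketched as follows. StressAllocator and its helpers are placeholders standing in for the real allocator (the limit handling repeats the earlier fake-limit sketch); only the ordering of the steps is meant to mirror the documentation.

struct StressAllocator { cursor: usize, limit: usize }

impl StressAllocator {
    // Fake-limit handling, as in the earlier sketch.
    fn restore_limit_for_stress(&mut self) {
        if self.limit < self.cursor { self.limit += self.cursor; }
    }
    fn set_limit_for_stress(&mut self) {
        if self.limit > self.cursor { self.limit -= self.cursor; }
    }

    // Plain bump allocation against the (restored) real limit.
    fn try_thread_local(&mut self, size: usize) -> Option<usize> {
        if self.cursor + size <= self.limit {
            let addr = self.cursor;
            self.cursor += size;
            Some(addr)
        } else {
            None
        }
    }

    // Placeholder: the real slowpath polls for GC and acquires a fresh block.
    fn slow_acquire_and_poll(&mut self, _size: usize) -> Option<usize> {
        None
    }

    fn alloc_slow_once_precise_stress(&mut self, size: usize, need_poll: bool) -> Option<usize> {
        if need_poll {
            // Stress budget exhausted: take the real slowpath immediately.
            return self.slow_acquire_and_poll(size);
        }
        // Otherwise, first retry the thread-local buffer under its real limit.
        self.restore_limit_for_stress();
        let result = self
            .try_thread_local(size)
            .or_else(|| self.slow_acquire_and_poll(size));
        // Re-arm the fake limit so the next allocation fails the fastpath
        // and comes back through the slowpath as well.
        self.set_limit_for_stress();
        result
    }
}

fn main() {
    let mut a = StressAllocator { cursor: 0x1000, limit: 0x100 }; // fake limit armed
    assert_eq!(a.alloc_slow_once_precise_stress(64, false), Some(0x1000));
    assert!(a.limit < a.cursor); // fake limit re-armed after the allocation
}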

fn get_space(&self) -> &'static dyn Space<VM>

Return the Space instance associated with this allocator instance.

fn get_context(&self) -> &AllocatorContext<VM>

Return the context for the allocator.

fn does_thread_local_allocation(&self) -> bool

Return if this allocator can do thread local allocation. If an allocator does not do thread local allocation, each allocation will go to slowpath and will have a check for GC polls.

fn get_thread_local_buffer_granularity(&self) -> usize

Return the granularity at which the allocator acquires memory from the global space to use as its thread-local buffer. For example, the BumpAllocator acquires memory in 32KB blocks: depending on the size of the current object, it always acquires N*32KB of memory (N >= 1), so the BumpAllocator returns 32KB for this method. Only allocators that do thread-local allocation need to implement this method.
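
As a small worked example of the granularity idea, using the 32KB figure quoted above for the BumpAllocator: a request is always backed by a whole number of thread-local buffers. The helper below is hypothetical and just does the rounding.

// Assumed granularity, matching the BumpAllocator example above.
const GRANULARITY: usize = 32 * 1024;

// Bytes actually acquired from the global space for a request of `size` bytes:
// N * 32KB with N >= 1.
fn bytes_acquired(size: usize) -> usize {
    ((size + GRANULARITY - 1) / GRANULARITY).max(1) * GRANULARITY
}

fn main() {
    assert_eq!(bytes_acquired(24), 32 * 1024);        // one block
    assert_eq!(bytes_acquired(40 * 1024), 64 * 1024); // rounds up to two blocks
}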

fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address

An allocation attempt. The implementation of this function depends on the allocator used. If an allocator supports thread local allocations, then the allocation will be serviced from its TLAB, otherwise it will default to using the slowpath, i.e. alloc_slow.

fn get_tls(&self) -> VMThread

Return the VMThread associated with this allocator instance.

fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address

Slowpath allocation attempt. This function is explicitly not inlined for performance considerations.

fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address

Slowpath allocation attempt. This function executes the actual slowpath allocation. A slowpath allocation in MMTk attempts to allocate the object using the per-allocator definition of alloc_slow_once. This function also accounts for increasing the allocation bytes in order to support stress testing. In case precise stress testing is being used, the alloc_slow_once_precise_stress function is used instead.

fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address

A wrapper method for alloc_slow_once to insert USDT tracepoints.

fn on_mutator_destroy(&mut self)

Called when the crate::plan::Mutator that includes this allocator is about to be destroyed. Some allocators may need to save or transfer their thread-local data to the space.

Auto Trait Implementations

impl<VM> !RefUnwindSafe for ImmixAllocator<VM>

impl<VM> Send for ImmixAllocator<VM>

impl<VM> Sync for ImmixAllocator<VM>

impl<VM> Unpin for ImmixAllocator<VM>

impl<VM> !UnwindSafe for ImmixAllocator<VM>

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> Downcast for T
where T: Any,

fn into_any(self: Box<T>) -> Box<dyn Any>

Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.

fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>

Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.

fn as_any(&self) -> &(dyn Any + 'static)

Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s.

fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)

Convert &mut Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s.

impl<T> DowncastSync for T
where T: Any + Send + Sync,

fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync>

Convert Arc<Trait> (where Trait: Downcast) to Arc<Any>. Arc<Any> can then be further downcast into Arc<ConcreteType> where ConcreteType implements Trait.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Pointable for T

const ALIGN: usize = _

The alignment of pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a with the given initializer.

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer.

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer.

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.