#[repr(C)]
pub struct FreeListAllocator<VM: VMBinding> {
    pub tls: VMThread,
    space: &'static MarkSweepSpace<VM>,
    context: Arc<AllocatorContext<VM>>,
    pub available_blocks: Box<[BlockList; 49]>,
    pub available_blocks_stress: Box<[BlockList; 49]>,
    pub unswept_blocks: Box<[BlockList; 49]>,
    pub consumed_blocks: Box<[BlockList; 49]>,
}

A MiMalloc free list allocator
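The allocator keeps its blocks in per-size-class bins: each of the 49-element BlockList arrays below holds one list per size class. As a rough illustration of the idea, here is a hypothetical power-of-two size-to-bin mapping; the crate's actual mapping is finer-grained and defined elsewhere, so treat this only as a sketch.

// Hypothetical size-class mapping, for illustration only. Real MiMalloc-style
// allocators use a finer, tuned mapping; this sketch just shows the idea of
// bucketing request sizes into a fixed number of bins.
const NUM_BINS: usize = 49;

fn sketch_bin_for_size(size: usize) -> usize {
    debug_assert!(size > 0);
    // Round up to a power of two and use the exponent as the bin index.
    (size.next_power_of_two().trailing_zeros() as usize).min(NUM_BINS - 1)
}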

Fields

tls: VMThread

The VMThread associated with this allocator instance.

space: &'static MarkSweepSpace<VM>

context: Arc<AllocatorContext<VM>>

available_blocks: Box<[BlockList; 49]>

Blocks with free space.

available_blocks_stress: Box<[BlockList; 49]>

Blocks with free space, used for precise stress GC. For precise stress GC, we need to trigger a slowpath allocation on every allocation. To achieve this, we put available blocks on this list instead, so normal fastpath allocations fail (they see the regular block lists as empty) and fall through to the slowpath.
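A minimal sketch of this trick, using simplified stand-in types (the real Block and BlockList carry considerably more state):

// Sketch: under stress mode, available blocks are parked in a separate list,
// so the fastpath (which only consults `available`) always fails and control
// reaches the slowpath, where a GC poll can happen.
struct SketchAllocator {
    available: Vec<u32>,        // stand-in for available_blocks
    available_stress: Vec<u32>, // stand-in for available_blocks_stress
    stress_mode: bool,
}

impl SketchAllocator {
    fn fastpath(&mut self) -> Option<u32> {
        // Kept empty under stress mode, so this always returns None.
        self.available.pop()
    }

    fn slowpath(&mut self) -> Option<u32> {
        // ... a real slowpath would poll for (stress) GC here ...
        if self.stress_mode {
            // Serve from the stress list, leaving `available` empty so the
            // next fastpath attempt fails again.
            self.available_stress.pop()
        } else {
            self.available.pop()
        }
    }
}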

unswept_blocks: Box<[BlockList; 49]>

Blocks that are marked but not yet swept.

consumed_blocks: Box<[BlockList; 49]>

Full blocks.

Implementations

impl<VM: VMBinding> FreeListAllocator<VM>

pub(crate) fn new(
    tls: VMThread,
    space: &'static MarkSweepSpace<VM>,
    context: Arc<AllocatorContext<VM>>,
) -> Self

fn block_alloc(&mut self, block: Block) -> Address

fn find_free_block_stress(&mut self, size: usize, align: usize) -> Option<Block>

fn find_free_block_local(&mut self, size: usize, align: usize) -> Option<Block>

fn find_free_block_with(
    available_blocks: &mut Box<[BlockList; 49]>,
    consumed_blocks: &mut Box<[BlockList; 49]>,
    size: usize,
    align: usize,
) -> Option<Block>

fn add_to_available_blocks(&mut self, bin: usize, block: Block, stress: bool)

Add a block to the given bin in the available block lists. Depending on which available block list is in use, this method adds the block to either available_blocks or available_blocks_stress.

fn recycle_local_blocks(
    &mut self,
    size: usize,
    align: usize,
    _stress_test: bool,
) -> Option<Block>

Try to recycle local blocks, if there are any. This is a no-op for eager-sweeping mark sweep.

fn acquire_global_block(
    &mut self,
    size: usize,
    align: usize,
    stress_test: bool,
) -> Option<Block>

Get a block from the space.

fn init_block(&self, block: Block, cell_size: usize)

fn store_block_tls(&self, block: Block)

pub(crate) fn prepare(&mut self)

pub(crate) fn release(&mut self)

fn abandon_blocks(&mut self, global: &mut AbandonedBlockLists)

Trait Implementations

impl<VM: VMBinding> Allocator<VM> for FreeListAllocator<VM>

fn get_tls(&self) -> VMThread

Return the VMThread associated with this allocator instance.

fn get_space(&self) -> &'static dyn Space<VM>

Return the Space instance associated with this allocator instance.

fn get_context(&self) -> &AllocatorContext<VM>

Return the context for the allocator.

fn alloc(&mut self, size: usize, align: usize, offset: usize) -> Address

An allocation attempt. The implementation of this function depends on the allocator used. If an allocator supports thread-local allocation, the allocation will be serviced from its TLAB; otherwise it defaults to the slowpath, i.e. alloc_slow.
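A minimal sketch of this generic fastpath/slowpath split, using a bump-pointer thread-local buffer for illustration. All names here are stand-ins for the sketch, not MMTk's actual fields; `align` is assumed to be a power of two.

// Generic fastpath/slowpath shape, sketched with a bump-pointer TLAB.
struct TlabSketch {
    cursor: usize,
    limit: usize,
}

impl TlabSketch {
    fn alloc(&mut self, size: usize, align: usize) -> usize {
        let start = (self.cursor + align - 1) & !(align - 1); // align up
        if start + size <= self.limit {
            self.cursor = start + size; // fastpath: bump within the buffer
            start
        } else {
            self.alloc_slow(size, align) // fastpath failed: take the slowpath
        }
    }

    #[inline(never)]
    fn alloc_slow(&mut self, _size: usize, _align: usize) -> usize {
        // A real slowpath polls for GC and acquires fresh memory from the
        // space (alloc_slow_once), then retries; elided in this sketch.
        unimplemented!()
    }
}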

fn alloc_slow_once(&mut self, size: usize, align: usize, offset: usize) -> Address

Single slowpath allocation attempt. This is called by alloc_slow_inline. The implementation of this function depends on the allocator used. Generally, if an allocator supports thread-local allocation, it will try to allocate more TLAB space here; if it doesn't, the allocator (generally) simply allocates enough space for the current object.

fn does_thread_local_allocation(&self) -> bool

Return whether this allocator can do thread-local allocation. If an allocator does not do thread-local allocation, each allocation goes to the slowpath, which includes a check for GC polls.

fn get_thread_local_buffer_granularity(&self) -> usize

Return the granularity at which the allocator acquires memory from the global space to use as its thread-local buffer. For example, the BumpAllocator acquires memory in 32KB blocks: depending on the size of the current object, it always acquires N×32KB (N≥1) of memory, so it returns 32KB from this method. Only allocators that do thread-local allocation need to implement this method.
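A small sketch of the N×32KB acquisition rule in the BumpAllocator example above; the function name and rounding logic are illustrative, not the crate's actual code.

// Round an acquisition request up to a whole number of 32KB chunks (N >= 1).
const GRANULARITY: usize = 32 * 1024;

fn sketch_acquire_size(object_size: usize) -> usize {
    let n = (object_size + GRANULARITY - 1) / GRANULARITY;
    n.max(1) * GRANULARITY
}

fn main() {
    assert_eq!(sketch_acquire_size(64), GRANULARITY);            // N = 1
    assert_eq!(sketch_acquire_size(40 * 1024), 2 * GRANULARITY); // N = 2
}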

fn alloc_slow_once_precise_stress(
    &mut self,
    size: usize,
    align: usize,
    offset: usize,
    need_poll: bool,
) -> Address

Single slowpath allocation attempt for stress tests. When the stress factor is set (e.g. to N), we expect a stress GC to be triggered for every N bytes allocated. However, allocators that do thread-local allocation may allocate from their thread-local buffer, which has no GC poll check, and may even allocate through a JIT-generated allocation fastpath that is unaware of stress-test GC. In both cases we cannot guarantee a stress GC every N bytes. To solve this, when the stress factor is set, this method is called instead of the normal alloc_slow_once(). Its implementation is expected to trick the fastpath so that every allocation fails there, jumps to the slowpath, and eventually calls this method again for the actual allocation.
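A rough sketch of the every-N-bytes accounting the stress factor implies; the bookkeeping names are stand-ins, not MMTk's actual state.

// Count bytes allocated through the slowpath and report when a stress GC
// poll is due.
struct StressCounter {
    allocated: usize,
    stress_factor: usize, // trigger a stress GC roughly every N bytes
}

impl StressCounter {
    fn should_poll(&mut self, size: usize) -> bool {
        self.allocated += size;
        if self.allocated >= self.stress_factor {
            self.allocated -= self.stress_factor;
            true // time to force a stress GC poll
        } else {
            false
        }
    }
}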

fn on_mutator_destroy(&mut self)

The crate::plan::Mutator that includes this allocator is about to be destroyed. Some allocators may need to save or transfer their thread-local data to the space.

fn alloc_slow(&mut self, size: usize, align: usize, offset: usize) -> Address

Slowpath allocation attempt. This function is explicitly not inlined, for performance reasons.

fn alloc_slow_inline(&mut self, size: usize, align: usize, offset: usize) -> Address

Slowpath allocation attempt. This function executes the actual slowpath allocation. A slowpath allocation in MMTk attempts to allocate the object using the per-allocator definition of alloc_slow_once. This function also accounts for the allocated bytes in order to support stress testing. When precise stress testing is in use, alloc_slow_once_precise_stress is called instead.

fn alloc_slow_once_traced(&mut self, size: usize, align: usize, offset: usize) -> Address

A wrapper method for alloc_slow_once to insert USDT tracepoints.

Auto Trait Implementations

impl<VM> !RefUnwindSafe for FreeListAllocator<VM>

impl<VM> Send for FreeListAllocator<VM>

impl<VM> Sync for FreeListAllocator<VM>

impl<VM> Unpin for FreeListAllocator<VM>

impl<VM> !UnwindSafe for FreeListAllocator<VM>

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> Downcast for T
where T: Any,

fn into_any(self: Box<T>) -> Box<dyn Any>

Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.

fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>

Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.

fn as_any(&self) -> &(dyn Any + 'static)

Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s.

fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)

Convert &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s.

impl<T> DowncastSync for T
where T: Any + Send + Sync,

fn into_any_arc(self: Arc<T>) -> Arc<dyn Any + Send + Sync>

Convert Arc<Trait> (where Trait: Downcast) to Arc<Any>. Arc<Any> can then be further downcast into Arc<ConcreteType> where ConcreteType implements Trait.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Pointable for T

const ALIGN: usize = _

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer.

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer.

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer.

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.