Trait mmtk::policy::space::Space

pub trait Space<VM: VMBinding>: 'static + SFT + Sync + Downcast {
    // Required methods
    fn as_space(&self) -> &dyn Space<VM>;
    fn as_sft(&self) -> &(dyn SFT + Sync + 'static);
    fn get_page_resource(&self) -> &dyn PageResource<VM>;
    fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource<VM>>;
    fn initialize_sft(&self, sft_map: &mut dyn SFTMap);
    fn common(&self) -> &CommonSpace<VM>;
    fn release_multiple_pages(&mut self, start: Address);
    fn enumerate_objects(&self, enumerator: &mut dyn ObjectEnumerator);

    // Provided methods
    fn will_oom_on_acquire(&self, tls: VMThread, size: usize) -> bool { ... }
    fn acquire(&self, tls: VMThread, pages: usize) -> Address { ... }
    fn address_in_space(&self, start: Address) -> bool { ... }
    fn in_space(&self, object: ObjectReference) -> bool { ... }
    fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool) { ... }
    fn ensure_mapped(&self) { ... }
    fn reserved_pages(&self) -> usize { ... }
    fn available_physical_pages(&self) -> usize { ... }
    fn get_name(&self) -> &'static str { ... }
    fn get_descriptor(&self) -> SpaceDescriptor { ... }
    fn get_gc_trigger(&self) -> &GCTrigger<VM> { ... }
    fn set_copy_for_sft_trace(&mut self, _semantics: Option<CopySemantics>) { ... }
    fn verify_side_metadata_sanity(&self, side_metadata_sanity_checker: &mut SideMetadataSanity) { ... }
}

Required Methods§

fn as_space(&self) -> &dyn Space<VM>

fn as_sft(&self) -> &(dyn SFT + Sync + 'static)

fn get_page_resource(&self) -> &dyn PageResource<VM>

fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource<VM>>

Get a mutable reference to the underlying page resource, or None if the space does not have a page resource.

fn initialize_sft(&self, sft_map: &mut dyn SFTMap)

Initialize entries in the SFT map for the space. This is called when the Space object has a non-moving address, as we will use the address to set the SFT. Currently, after we create a boxed plan, the spaces in the plan have non-moving addresses.

fn common(&self) -> &CommonSpace<VM>

fn release_multiple_pages(&mut self, start: Address)

fn enumerate_objects(&self, enumerator: &mut dyn ObjectEnumerator)

Enumerate objects in the current space.

Implementers can use the enumerator to report

  • individual objects within the space using enumerator.visit_object, and
  • ranges of addresses that may contain objects using enumerator.visit_address_range. The caller will then enumerate objects in the range using the VO-bit metadata.

Each object in the space shall be covered by one of the two methods above.

§Implementation considerations

Skipping empty ranges: When enumerating address ranges, spaces can skip ranges (blocks, chunks, etc.) that are guaranteed not to contain objects.

Dynamic dispatch: Because Space is a trait and enumerator is a dyn reference, invoking methods of enumerator involves dynamic dispatch. The overhead is acceptable if we call it once per block, because scanning the VO bits will dominate the execution time. For the large object space (LOS), it is cheaper to enumerate individual objects than to scan the VO bits, because objects in that space are sparse.
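As a hedged illustration, a block-based space might implement this roughly as follows. The block iteration (self.blocks(), block.start(), block.end(), block.may_contain_objects()) is hypothetical and stands in for whatever block bookkeeping the concrete space keeps; only visit_address_range and visit_object are the enumerator methods documented here.

    // A sketch inside a hypothetical `impl<VM: VMBinding> Space<VM> for MyBlockSpace<VM>`.
    // `self.blocks()` and the `Block` helpers are illustrative, not mmtk-core APIs.
    fn enumerate_objects(&self, enumerator: &mut dyn ObjectEnumerator) {
        for block in self.blocks() {
            // Skipping empty ranges: blocks guaranteed to hold no objects
            // are never reported.
            if !block.may_contain_objects() {
                continue;
            }
            // Report one block at a time; the caller scans the VO bits in
            // this range to find the individual objects.
            enumerator.visit_address_range(block.start(), block.end());
        }
    }

A sparse space such as the LOS would instead call enumerator.visit_object for each object it tracks.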

Provided Methods§

fn will_oom_on_acquire(&self, tls: VMThread, size: usize) -> bool

A check for the obvious out-of-memory case: if the requested size is larger than the heap size, it is definitely an OOM. We would like to identify that and allow the binding to deal with the OOM. Without this check, we would attempt to allocate from the page resource. If the requested size is unrealistically large (such as usize::MAX), it breaks the assumptions of our implementation of the page resource, VM map, etc. This check prevents that and allows us to handle the OOM case.

Each allocator that may request an arbitrary size should call this method before acquiring memory from the space. For example, the bump pointer allocator and the large object allocator need to call this method; allocators that only allocate memory in fixed-size blocks do not. An allocator should call this method before doing any computation on the size, to avoid arithmetic overflow: if we have to do computation in the allocation fastpath and overflow happens there, there is nothing we can do about it.

Returns a boolean indicating whether we will be out of memory, as determined by this check.
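A hedged sketch of the intended call pattern from an allocator slow path; the allocator type, its space field, and the bytes_to_required_pages helper are hypothetical, while will_oom_on_acquire and acquire are the trait methods documented on this page.

    // Illustrative slow path of an allocator that can receive arbitrary sizes.
    fn alloc_slow(&mut self, tls: VMThread, size: usize, align: usize) -> Address {
        // Check the obvious OOM case *before* any size arithmetic, so an
        // absurd request (e.g. usize::MAX) cannot overflow below.
        if self.space.will_oom_on_acquire(tls, size) {
            return Address::ZERO; // signal failure; the binding handles the OOM
        }
        let pages = bytes_to_required_pages(size, align); // hypothetical helper
        self.space.acquire(tls, pages)
    }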

fn acquire(&self, tls: VMThread, pages: usize) -> Address

fn address_in_space(&self, start: Address) -> bool

fn in_space(&self, object: ObjectReference) -> bool

fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool)

This is called after we get a result from the page resource. The space may tap into this hook to monitor heap growth. The call is made from within the page resource's critical region, immediately before yielding the lock.

Arguments:

  • start: The start of the newly allocated space.
  • bytes: The size of the newly allocated space.
  • new_chunk: true if the new space encroached upon or started a new chunk or chunks.
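As a hedged illustration, a space that only wants to observe growth could override the hook with a log statement (the space type and message format are illustrative):

    // Inside a hypothetical `impl<VM: VMBinding> Space<VM> for MyCustomSpace<VM>`.
    // This runs inside the page resource's critical region, so keep it cheap.
    fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool) {
        trace!("{}: grew by {} bytes at {} (new chunk: {})",
            self.get_name(), bytes, start, new_chunk);
    }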

fn ensure_mapped(&self)

Ensure this space is marked as mapped. This is used when the space is already mapped (e.g. for a VM image which is externally mmapped).

fn reserved_pages(&self) -> usize

fn available_physical_pages(&self) -> usize

Return the number of physical pages available.

fn get_name(&self) -> &'static str

fn get_descriptor(&self) -> SpaceDescriptor

fn get_gc_trigger(&self) -> &GCTrigger<VM>

fn set_copy_for_sft_trace(&mut self, _semantics: Option<CopySemantics>)

Which copy semantics we should use for this space if we copy objects from this space. This is only needed for plans that use SFTProcessEdges.

fn verify_side_metadata_sanity(&self, side_metadata_sanity_checker: &mut SideMetadataSanity)

Ensure that the current space's metadata context does not have any issues. Panics with a suitable message if any issue is detected. It also initialises the sanity maps, which will then be used if the extreme_assertions feature is active. Internally, this calls verify_metadata_context() from util::metadata::sanity.

This function is called once per space by its parent plan but may be called multiple times per policy.

Arguments:

  • side_metadata_sanity_checker: The SideMetadataSanity object instantiated in the calling plan.
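A hedged sketch of the call pattern from a plan, assuming a SideMetadataSanity::new() constructor and a hypothetical plan with two space fields; the point is that a single checker instance is shared across all spaces of the plan.

    // Illustrative: the plan builds one checker and passes it to every space,
    // so inconsistent side-metadata layouts across spaces are detected.
    let mut checker = SideMetadataSanity::new();
    my_plan.copy_space.verify_side_metadata_sanity(&mut checker);
    my_plan.immortal_space.verify_side_metadata_sanity(&mut checker);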

Implementations§

impl<VM> dyn Space<VM>
where VM: Any + 'static + VMBinding,

pub fn is<__T: Space<VM>>(&self) -> bool

Returns true if the trait object wraps an object of type __T.

pub fn downcast<__T: Space<VM>>(self: Box<Self>) -> Result<Box<__T>, Box<Self>>

Returns a boxed object from a boxed trait object if the underlying object is of type __T. Returns the original boxed trait if it isn’t.

pub fn downcast_rc<__T: Space<VM>>(self: Rc<Self>) -> Result<Rc<__T>, Rc<Self>>

Returns an Rc-ed object from an Rc-ed trait object if the underlying object is of type __T. Returns the original Rc-ed trait if it isn’t.

pub fn downcast_ref<__T: Space<VM>>(&self) -> Option<&__T>

Returns a reference to the object within the trait object if it is of type __T, or None if it isn’t.

pub fn downcast_mut<__T: Space<VM>>(&mut self) -> Option<&mut __T>

Returns a mutable reference to the object within the trait object if it is of type __T, or None if it isn’t.
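For example, a hedged sketch of recovering a concrete space from a &mut dyn Space<VM>; the helper function is hypothetical and imports are elided, while ImmixSpace is one of the implementors listed below.

    // Hypothetical helper showing the downcast pattern.
    fn with_immix<VM: VMBinding>(space: &mut dyn Space<VM>) {
        if let Some(immix) = space.downcast_mut::<ImmixSpace<VM>>() {
            // `immix` is now a concrete &mut ImmixSpace<VM>; ImmixSpace-specific
            // methods can be called here.
            let _ = immix;
        }
    }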

Implementors§

impl<VM: VMBinding> Space<VM> for CopySpace<VM>

impl<VM: VMBinding> Space<VM> for ImmixSpace<VM>

impl<VM: VMBinding> Space<VM> for ImmortalSpace<VM>

impl<VM: VMBinding> Space<VM> for LargeObjectSpace<VM>

impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM>

impl<VM: VMBinding> Space<VM> for MarkCompactSpace<VM>

impl<VM: VMBinding> Space<VM> for MallocSpace<VM>

impl<VM: VMBinding> Space<VM> for MarkSweepSpace<VM>

impl<VM: VMBinding> Space<VM> for VMSpace<VM>