Struct mmtk::policy::largeobjectspace::LargeObjectSpace
pub struct LargeObjectSpace<VM: VMBinding> {
common: CommonSpace<VM>,
pr: FreeListPageResource<VM>,
mark_state: u8,
in_nursery_gc: bool,
treadmill: TreadMill,
}
This type implements a policy for large objects. Each instance corresponds to one Treadmill space.
Fields§
§common: CommonSpace<VM>
§pr: FreeListPageResource<VM>
§mark_state: u8
§in_nursery_gc: bool
§treadmill: TreadMill
Implementations§
impl<VM: VMBinding> LargeObjectSpace<VM>
pub fn new( args: PlanCreateSpaceArgs<'_, VM>, protect_memory_on_release: bool ) -> Self
pub fn prepare(&mut self, full_heap: bool)
pub fn release(&mut self, full_heap: bool)
pub fn trace_object<Q: ObjectQueue>( &self, queue: &mut Q, object: ObjectReference ) -> ObjectReference
fn sweep_large_pages(&mut self, sweep_nursery: bool)
pub fn allocate_pages(&self, tls: VMThread, pages: usize) -> Address
Allocate the given number of pages for an object.
fn test_and_mark(&self, object: ObjectReference, value: u8) -> bool
Test if the object’s mark bit is the same as the given value. If it is not, the method will attempt to mark the object and clear its nursery bit. If the attempt succeeds, the method returns true, meaning the object is marked by this invocation. Otherwise, it returns false.
fn test_mark_bit(&self, object: ObjectReference, value: u8) -> bool
fn is_in_nursery(&self, object: ObjectReference) -> bool
Check if a given object is in the nursery.
Trait Implementations§
impl<VM: VMBinding> PolicyTraceObject<VM> for LargeObjectSpace<VM>
fn trace_object<Q: ObjectQueue, const KIND: u8>( &self, queue: &mut Q, object: ObjectReference, _copy: Option<CopySemantics>, _worker: &mut GCWorker<VM> ) -> ObjectReference
Trace an object in the policy. If the policy copies objects, we should expect copy to be a Some value.
fn may_move_objects<const KIND: u8>() -> bool
Return whether the policy moves objects.
fn post_scan_object(&self, _object: ObjectReference)
Policy-specific post-scan-object hook. It is called after scanning each object in this space.
impl<VM: VMBinding> SFT for LargeObjectSpace<VM>
fn is_live(&self, object: ObjectReference) -> bool
Is the object live, determined by the policy?
fn pin_object(&self, _object: ObjectReference) -> bool
fn unpin_object(&self, _object: ObjectReference) -> bool
fn is_object_pinned(&self, _object: ObjectReference) -> bool
fn is_movable(&self) -> bool
Is the object movable, determined by the policy? E.g. the policy is non-moving, or the object is pinned.
fn is_sane(&self) -> bool
Is the object sane? A policy should return false if there is any abnormality about the object; the sanity checker will fail if an object is not sane.
fn initialize_object_metadata(&self, object: ObjectReference, alloc: bool)
Initialize object metadata (in the header, or in the side metadata).
fn is_mmtk_object(&self, addr: Address) -> Option<ObjectReference>
Is addr a valid object reference to an object allocated in this space? This default implementation works for all spaces that use MMTk’s mapper to allocate memory. Some spaces, like MallocSpace, use third-party libraries to allocate memory. Such spaces need to override this method.
fn find_object_from_internal_pointer( &self, ptr: Address, max_search_bytes: usize ) -> Option<ObjectReference>
fn sft_trace_object( &self, queue: &mut VectorObjectQueue, object: ObjectReference, _worker: GCWorkerMutRef<'_> ) -> ObjectReference
Trace objects through the SFT. This, along with SFTProcessEdges, provides an easy way for most plans to trace objects without the need to implement any plan-specific code. However, tracing objects for some policies is more complicated, and they do not provide an implementation of this method. For example, mark compact space requires tracing twice in each GC, and Immix has a defrag trace and a fast trace.
fn get_forwarded_object( &self, _object: ObjectReference ) -> Option<ObjectReference>
Get forwarding pointer if the object is forwarded.
fn is_reachable(&self, object: ObjectReference) -> bool
Is the object reachable, determined by the policy?
Note: Objects in ImmortalSpace may have is_live = true but are actually unreachable.
fn is_in_space(&self, _object: ObjectReference) -> bool
Is the object managed by MMTk? In most cases, if we find the SFT entry for an object, the object is in the space and managed by MMTk. However, for some spaces, like MallocSpace, we mark the entire chunk in the SFT table as a malloc space, but only some of the addresses in the space contain actual MMTk objects, so a further check is needed.
impl<VM: VMBinding> Space<VM> for LargeObjectSpace<VM>
fn as_space(&self) -> &dyn Space<VM>
fn as_sft(&self) -> &(dyn SFT + Sync + 'static)
fn get_page_resource(&self) -> &dyn PageResource<VM>
fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource<VM>>
Get a mutable reference to the underlying page resource, or None if the space does not have a page resource.
fn initialize_sft(&self, sft_map: &mut dyn SFTMap)
Initialize entries in the SFT map for the space. This is called once the Space object has a non-moving address, as we will use the address to set the SFT. Currently, after we create a boxed plan, spaces in the plan have a non-moving address.
fn common(&self) -> &CommonSpace<VM>
fn release_multiple_pages(&mut self, start: Address)
fn enumerate_objects(&self, enumerator: &mut dyn ObjectEnumerator)
Enumerate objects in the current space.
fn will_oom_on_acquire(&self, tls: VMThread, size: usize) -> bool
A check for the obvious out-of-memory case: if the requested size is larger than the heap size, it is definitely an OOM. We would like to identify that and allow the binding to deal with the OOM. Without this check, we would attempt to allocate from the page resource. If the requested size is unrealistically large (such as usize::MAX), it breaks the assumptions of our implementation of the page resource, VM map, etc. This check prevents that, and allows us to handle the OOM case.
Each allocator that may request an arbitrary size should call this method before acquiring memory from the space. For example, the bump pointer allocator and the large object allocator need to call this method. On the other hand, allocators that only allocate memory in fixed-size blocks do not need to call this method.
An allocator should call this method before doing any computation on the size to avoid arithmetic overflow. If we have to do computation in the allocation fastpath and overflow happens there, there is nothing we can do about it.
Returns a boolean indicating whether we will be out of memory, determined by the check.
fn acquire(&self, tls: VMThread, pages: usize) -> Address
fn address_in_space(&self, start: Address) -> bool
fn in_space(&self, object: ObjectReference) -> bool
fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool)
This is called after we get a result from the page resource. The space may tap into this hook to monitor heap growth. The call is made from within the page resource’s critical region, immediately before yielding the lock.
fn ensure_mapped(&self)
Ensure this space is marked as mapped – used when the space is already mapped (e.g. for a VM image which is externally mmapped).
fn reserved_pages(&self) -> usize
fn available_physical_pages(&self) -> usize
Return the number of physical pages available.
fn get_name(&self) -> &'static str
fn get_gc_trigger(&self) -> &GCTrigger<VM>
fn set_copy_for_sft_trace(&mut self, _semantics: Option<CopySemantics>)
What copy semantic we should use for this space if we copy objects from this space.
This is only needed for plans that use SFTProcessEdges
fn verify_side_metadata_sanity( &self, side_metadata_sanity_checker: &mut SideMetadataSanity )
Ensure that the current space’s metadata context does not have any issues. Panics with a suitable message if any issue is detected. It also initialises the sanity maps, which will then be used if the extreme_assertions feature is active. Internally this calls verify_metadata_context() from util::metadata::sanity.
Auto Trait Implementations§
impl<VM> !RefUnwindSafe for LargeObjectSpace<VM>
impl<VM> Send for LargeObjectSpace<VM>
impl<VM> Sync for LargeObjectSpace<VM>
impl<VM> Unpin for LargeObjectSpace<VM> where VM: Unpin
impl<VM> !UnwindSafe for LargeObjectSpace<VM>
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s.
impl<T> DowncastSync for T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.