Struct mmtk::policy::lockfreeimmortalspace::LockFreeImmortalSpace

pub struct LockFreeImmortalSpace<VM: VMBinding> {
    name: &'static str,
    cursor: Atomic<Address>,
    limit: Address,
    start: Address,
    total_bytes: usize,
    slow_path_zeroing: bool,
    metadata: SideMetadataContext,
    gc_trigger: Arc<GCTrigger<VM>>,
}
This type implements a lock-free version of the immortal collection policy. It is close to OpenJDK's Epsilon GC. Unlike the normal ImmortalSpace, this version should only be used by the NoGC plan, and it now uses the whole heap range.
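The space hands out memory by atomically bumping a single cursor between `start` and `limit`, which is why no lock is needed. A minimal, self-contained sketch of that lock-free bump allocation (a hypothetical `BumpSpace` using plain `usize` addresses; the real space uses MMTk's `Atomic<Address>` and maps memory on demand):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Simplified stand-in for the lock-free immortal space: one shared
/// cursor bumped with compare-exchange, never reset, never freed.
struct BumpSpace {
    cursor: AtomicUsize, // next free address
    limit: usize,        // end of the heap range
}

impl BumpSpace {
    fn new(start: usize, total_bytes: usize) -> Self {
        BumpSpace {
            cursor: AtomicUsize::new(start),
            limit: start + total_bytes,
        }
    }

    /// Allocate `bytes` with a CAS loop; any number of mutators can
    /// allocate concurrently without taking a lock.
    fn alloc(&self, bytes: usize) -> Option<usize> {
        let mut old = self.cursor.load(Ordering::Relaxed);
        loop {
            let new = old.checked_add(bytes)?;
            if new > self.limit {
                return None; // out of memory: an immortal space never reclaims
            }
            match self
                .cursor
                .compare_exchange_weak(old, new, Ordering::Relaxed, Ordering::Relaxed)
            {
                Ok(_) => return Some(old), // we own [old, new)
                Err(current) => old = current, // lost the race; retry
            }
        }
    }
}
```

Because the cursor only ever moves forward, a failed CAS simply retries from the winner's new cursor value; there is no rollback path.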
Fields

name: &'static str

cursor: Atomic<Address>
Heap range start

limit: Address
Heap range end

start: Address
Start of this space

total_bytes: usize
Total bytes for the space

slow_path_zeroing: bool
Zero memory after slow-path allocation

metadata: SideMetadataContext

gc_trigger: Arc<GCTrigger<VM>>
Implementations

impl<VM: VMBinding> LockFreeImmortalSpace<VM>

pub fn new(args: PlanCreateSpaceArgs<'_, VM>) -> Self
Trait Implementations

impl<VM: VMBinding> PolicyTraceObject<VM> for LockFreeImmortalSpace<VM>
fn trace_object<Q: ObjectQueue, const KIND: u8>(
    &self,
    _queue: &mut Q,
    _object: ObjectReference,
    _copy: Option<CopySemantics>,
    _worker: &mut GCWorker<VM>
) -> ObjectReference

Trace object in the policy. If the policy copies objects, we should expect copy to be a Some value.

fn may_move_objects<const KIND: u8>() -> bool
Return whether the policy moves objects.
fn post_scan_object(&self, _object: ObjectReference)
Policy-specific post-scan-object hook. It is called after scanning
each object in this space.
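For this space the trace is trivial: nothing is ever reclaimed or moved. A sketch of what a non-moving policy's trace looks like, using simplified stand-ins for MMTk's types (the real trait is generic over the VM binding and the object queue):

```rust
/// Hypothetical, simplified stand-in for MMTk's ObjectReference.
#[derive(Clone, Copy, Debug, PartialEq)]
struct ObjectReference(usize);

/// Cut-down version of the PolicyTraceObject interface for illustration.
trait PolicyTraceObject {
    fn trace_object(
        &self,
        queue: &mut Vec<ObjectReference>,
        object: ObjectReference,
    ) -> ObjectReference;
    fn may_move_objects() -> bool;
}

struct ImmortalPolicy;

impl PolicyTraceObject for ImmortalPolicy {
    fn trace_object(
        &self,
        _queue: &mut Vec<ObjectReference>,
        object: ObjectReference,
    ) -> ObjectReference {
        // An immortal space never reclaims or moves objects, so tracing
        // returns the same reference (a real space would also mark the
        // object and enqueue it on first visit).
        object
    }

    fn may_move_objects() -> bool {
        false // non-moving policy
    }
}
```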
impl<VM: VMBinding> SFT for LockFreeImmortalSpace<VM>

fn is_live(&self, _object: ObjectReference) -> bool
Is the object live, determined by the policy?
fn pin_object(&self, _object: ObjectReference) -> bool
fn unpin_object(&self, _object: ObjectReference) -> bool
fn is_object_pinned(&self, _object: ObjectReference) -> bool
fn is_movable(&self) -> bool
Is the object movable, determined by the policy? E.g. the policy is non-moving,
or the object is pinned.
fn is_sane(&self) -> bool

Is the object sane? A policy should return false if there is any abnormality about the object - the sanity checker will fail if an object is not sane.
fn initialize_object_metadata(&self, _object: ObjectReference, _alloc: bool)
Initialize object metadata (in the header, or in the side metadata).
fn is_mmtk_object(&self, addr: Address) -> Option<ObjectReference>

Is addr a valid object reference to an object allocated in this space? This default implementation works for all spaces that use MMTk's mapper to allocate memory. Some spaces, like MallocSpace, use third-party libraries to allocate memory. Such spaces need to override this method.

fn find_object_from_internal_pointer(
    &self,
    ptr: Address,
    max_search_bytes: usize
) -> Option<ObjectReference>
fn sft_trace_object(
    &self,
    _queue: &mut VectorObjectQueue,
    _object: ObjectReference,
    _worker: GCWorkerMutRef<'_>
) -> ObjectReference

Trace objects through the SFT. This, along with SFTProcessEdges, provides an easy way for most plans to trace objects without the need to implement any plan-specific code. However, tracing objects for some policies is more complicated, and those policies do not provide an implementation of this method. For example, the mark compact space requires tracing twice in each GC, and Immix has a defrag trace and a fast trace.

fn get_forwarded_object(
    &self,
    _object: ObjectReference
) -> Option<ObjectReference>
Get forwarding pointer if the object is forwarded.
fn is_reachable(&self, object: ObjectReference) -> bool

Is the object reachable, determined by the policy? Note: objects in an ImmortalSpace may have is_live = true but actually be unreachable.

fn is_in_space(&self, _object: ObjectReference) -> bool
Is the object managed by MMTk? For most cases, if we find the sft for an object, that means
the object is in the space and managed by MMTk. However, for some spaces, like MallocSpace,
we mark the entire chunk in the SFT table as a malloc space, but only some of the addresses
in the space contain actual MMTk objects. So they need a further check.
impl<VM: VMBinding> Space<VM> for LockFreeImmortalSpace<VM>

fn get_name(&self) -> &'static str

Get the name of the space. We have to override the default implementation because LockFreeImmortalSpace doesn't have a common space.
fn verify_side_metadata_sanity(
    &self,
    side_metadata_sanity_checker: &mut SideMetadataSanity
)

We have to override the default implementation because LockFreeImmortalSpace doesn't put metadata in a common space.
fn as_space(&self) -> &dyn Space<VM>
fn as_sft(&self) -> &(dyn SFT + Sync + 'static)
fn get_page_resource(&self) -> &dyn PageResource<VM>
fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource<VM>>

Get a mutable reference to the underlying page resource, or None if the space does not have a page resource.

fn common(&self) -> &CommonSpace<VM>
fn get_gc_trigger(&self) -> &GCTrigger<VM>
fn release_multiple_pages(&mut self, _start: Address)
fn initialize_sft(&self, sft_map: &mut dyn SFTMap)

Initialize entries in the SFT map for the space. This is called when the Space object has a non-moving address, as we will use the address to set the sft. Currently, after we create a boxed plan, spaces in the plan have a non-moving address.
fn reserved_pages(&self) -> usize
fn acquire(&self, _tls: VMThread, pages: usize) -> Address
fn enumerate_objects(&self, enumerator: &mut dyn ObjectEnumerator)

Enumerate objects in the current space.
fn will_oom_on_acquire(&self, tls: VMThread, size: usize) -> bool

A check for the obvious out-of-memory case: if the requested size is larger than the heap size, it is definitely an OOM. We would like to identify that, and allow the binding to deal with the OOM. Without this check, we would attempt to allocate from the page resource. If the requested size is unrealistically large (such as usize::MAX), it breaks the assumptions of our implementations of the page resource, vm map, etc. This check prevents that, and allows us to handle the OOM case.

Each allocator that may request an arbitrary size should call this method before acquiring memory from the space. For example, the bump pointer allocator and the large object allocator need to call this method. On the other hand, allocators that only allocate memory in fixed-size blocks do not need to call this method.

An allocator should call this method before doing any computation on the size, to avoid arithmetic overflow. If we have to do computation in the allocation fastpath and overflow happens there, there is nothing we can do about it.

Returns a boolean to indicate if we will be out of memory, determined by the check.

fn address_in_space(&self, start: Address) -> bool
fn in_space(&self, object: ObjectReference) -> bool
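The obvious-OOM check described above boils down to comparing the request against the total heap size before any further size arithmetic can overflow. A sketch (hypothetical standalone function; in MMTk this is a method on Space that takes a tls argument and reads the heap size from the plan's GC trigger):

```rust
/// Sketch of the obvious out-of-memory check: reject a request that can
/// never succeed before any page-rounding arithmetic is done on it.
fn will_oom_on_acquire(heap_bytes: usize, requested_bytes: usize) -> bool {
    // A request larger than the whole heap can never be satisfied.
    // Rejecting it here also keeps absurd sizes (e.g. usize::MAX) from
    // overflowing later alignment/page computations in the page resource.
    requested_bytes > heap_bytes
}
```

The point of doing the comparison first is ordering: once an allocator starts rounding the size up to pages, a near-usize::MAX request would wrap around and slip past any later check.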
fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool)

This is called after we get a result from the page resource. The space may tap into this hook to monitor heap growth. The call is made from within the page resource's critical region, immediately before yielding the lock.
fn ensure_mapped(&self)

Ensure this space is marked as mapped – used when the space is already mapped (e.g. for a VM image which is externally mmapped).
fn available_physical_pages(&self) -> usize
Return the number of physical pages available.
fn set_copy_for_sft_trace(&mut self, _semantics: Option<CopySemantics>)

What copy semantics we should use for this space if we copy objects from this space. This is only needed for plans that use SFTProcessEdges.
Auto Trait Implementations
impl<VM> !RefUnwindSafe for LockFreeImmortalSpace<VM>
impl<VM> Send for LockFreeImmortalSpace<VM>
impl<VM> Sync for LockFreeImmortalSpace<VM>
impl<VM> Unpin for LockFreeImmortalSpace<VM>
impl<VM> !UnwindSafe for LockFreeImmortalSpace<VM>
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.
impl<T> Downcast for T
where
    T: Any,

fn into_any(self: Box<T>) -> Box<dyn Any>

Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.

fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.

fn as_any(&self) -> &(dyn Any + 'static)
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.

fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.

impl<T> DowncastSync for T
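The Downcast methods above come from the downcast-rs crate. The core idea can be shown with only std's Any: each implementor exposes an as_any conversion so callers can recover the concrete type behind a trait object (Shape and Circle are hypothetical names for illustration):

```rust
use std::any::Any;

// A trait whose implementors can be downcast back to their concrete type.
trait Shape {
    fn as_any(&self) -> &dyn Any;
}

struct Circle {
    radius: f64,
}

impl Shape for Circle {
    // Rust cannot generate &dyn Any's vtable from &dyn Shape's, so each
    // implementor provides the conversion itself - exactly the problem
    // the Downcast trait solves generically.
    fn as_any(&self) -> &dyn Any {
        self
    }
}
```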
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
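IntoEither comes from the either crate, blanket-implemented for every Sized type. A self-contained sketch of the same behavior (minimal Either stand-in with matching names; this is an illustration, not the crate itself):

```rust
/// Minimal stand-in for either::Either.
#[derive(Debug, PartialEq)]
enum Either<L, R> {
    Left(L),
    Right(R),
}

/// Blanket trait mirroring either::IntoEither: wrap any value in a
/// Left or Right variant based on a flag or a predicate.
trait IntoEither: Sized {
    fn into_either(self, into_left: bool) -> Either<Self, Self> {
        if into_left {
            Either::Left(self)
        } else {
            Either::Right(self)
        }
    }

    fn into_either_with<F: FnOnce(&Self) -> bool>(self, into_left: F) -> Either<Self, Self> {
        // Evaluate the predicate on a borrow, then move self into a variant.
        let left = into_left(&self);
        self.into_either(left)
    }
}

impl<T> IntoEither for T {}
```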