Struct mmtk::policy::marksweepspace::native_ms::global::MarkSweepSpace
pub struct MarkSweepSpace<VM: VMBinding> {
pub common: CommonSpace<VM>,
pr: BlockPageResource<VM, Block>,
chunk_map: ChunkMap,
scheduler: Arc<GCWorkScheduler<VM>>,
abandoned: Mutex<AbandonedBlockLists>,
abandoned_in_gc: Mutex<AbandonedBlockLists>,
pending_release_packets: AtomicUsize,
}
A mark sweep space.
The space and each free-list allocator own some block lists. A block that is in use belongs to exactly one block list, and whoever owns that block list has exclusive access to the blocks in it, so there should be no data races on blocks. A thread must NOT access a block list it does not own.
The table below roughly describes what we do in each phase.
Phase | Allocator local block lists | Global abandoned block lists | Chunk map
---|---|---|---
Allocation | Alloc from local. Lazy: sweep local blocks. | Move blocks from global to local block lists. | -
GC - Prepare | - | - | Find used chunks, reset block mark, bzero mark bit
GC - Trace | Trace object and mark blocks. No block list access. | Trace object and mark blocks. No block list access. | -
GC - Release | Lazy: move blocks to local unswept list. Eager: sweep local blocks. Both: return local blocks to a temp global list. | Lazy: move blocks to global unswept list. Eager: sweep global blocks. | -
GC - End of GC | - | Merge the temp global lists | -
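To make the ownership hand-off concrete, here is a minimal, self-contained sketch of the allocation-phase behaviour in the table: blocks move from the lock-protected global abandoned lists to a mutator-local list, after which that mutator has exclusive access. The types below (`Block`, `AbandonedBlocks`, `LocalAllocator`) are illustrative stand-ins, not MMTk's real `Block`, `AbandonedBlockLists`, or free-list allocator.

```rust
use std::sync::{Arc, Mutex};

/// Illustrative stand-in for MMTk's `Block` (the real type wraps a block address).
#[derive(Debug, Clone, Copy)]
struct Block(usize);

/// Illustrative stand-in for the space-owned abandoned block lists.
#[derive(Default)]
struct AbandonedBlocks {
    blocks: Vec<Block>,
}

/// Illustrative stand-in for a mutator-local free-list allocator.
struct LocalAllocator {
    local: Vec<Block>,
    global: Arc<Mutex<AbandonedBlocks>>,
}

impl LocalAllocator {
    /// Allocation phase: use the local list first; only lock the global
    /// abandoned lists when the local list is empty. Once a block is local,
    /// this thread owns it exclusively and needs no synchronization.
    fn get_block(&mut self) -> Option<Block> {
        if let Some(b) = self.local.pop() {
            return Some(b);
        }
        // Move a block from the global abandoned lists to this allocator.
        self.global.lock().unwrap().blocks.pop()
    }
}

fn main() {
    let global = Arc::new(Mutex::new(AbandonedBlocks {
        blocks: vec![Block(0x1000), Block(0x2000)],
    }));
    let mut alloc = LocalAllocator { local: vec![], global: Arc::clone(&global) };
    // The first request reuses a block abandoned by some earlier mutator.
    assert!(alloc.get_block().is_some());
}
```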
Fields
common: CommonSpace<VM>
pr: BlockPageResource<VM, Block>
chunk_map: ChunkMap
Allocation status for all chunks in MS space
scheduler: Arc<GCWorkScheduler<VM>>
Work packet scheduler
abandoned: Mutex<AbandonedBlockLists>
Abandoned blocks. If a mutator dies, all its blocks go to these abandoned block lists. Blocks in these lists are reused during the mutator phase. The space is responsible for the release work on these block lists.
abandoned_in_gc: Mutex<AbandonedBlockLists>
Abandoned blocks during a GC. When each allocator finishes its release work, it returns its local blocks to these global lists, so the space does not need to do release work for them. These lists are only filled during the release phase, and are merged into the abandoned lists above at the end of a GC (see the sketch after the field list).
pending_release_packets: AtomicUsize
Count the number of pending ReleaseMarkSweepSpace and ReleaseMutator work packets during the Release stage.
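A simplified, self-contained sketch of the end-of-GC merge described for the two abandoned-block fields above; `AbandonedLists` and `SpaceModel` are illustrative stand-ins, not MMTk's `AbandonedBlockLists` or `MarkSweepSpace`.

```rust
use std::sync::Mutex;

/// Illustrative stand-in for the abandoned block lists (the real type holds
/// per-size-class lists rather than a single vector).
#[derive(Default)]
struct AbandonedLists {
    available: Vec<usize>,
}

impl AbandonedLists {
    /// Drain `other` into `self`.
    fn merge(&mut self, other: &mut AbandonedLists) {
        self.available.append(&mut other.available);
    }
}

/// Simplified model of the two space-level lists and the end-of-GC merge.
struct SpaceModel {
    abandoned: Mutex<AbandonedLists>,
    abandoned_in_gc: Mutex<AbandonedLists>,
}

impl SpaceModel {
    /// End of GC: blocks returned by allocators during the release phase
    /// (held in `abandoned_in_gc`) become ordinary abandoned blocks that
    /// mutators may reuse in the next mutator phase.
    fn end_of_gc(&self) {
        let mut in_gc = self.abandoned_in_gc.lock().unwrap();
        self.abandoned.lock().unwrap().merge(&mut in_gc);
    }
}

fn main() {
    let space = SpaceModel {
        abandoned: Mutex::new(AbandonedLists::default()),
        abandoned_in_gc: Mutex::new(AbandonedLists { available: vec![0x3000] }),
    };
    space.end_of_gc();
    assert_eq!(space.abandoned.lock().unwrap().available.len(), 1);
}
```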
Implementations
impl<VM: VMBinding> MarkSweepSpace<VM>
pub fn extend_global_side_metadata_specs(_specs: &mut Vec<SideMetadataSpec>)
pub fn new(args: PlanCreateSpaceArgs<'_, VM>) -> MarkSweepSpace<VM>
fn trace_object<Q: ObjectQueue>( &self, queue: &mut Q, object: ObjectReference ) -> ObjectReference
pub fn record_new_block(&self, block: Block)
pub fn prepare(&mut self)
pub fn release(&mut self)
pub fn end_of_gc(&mut self)
pub fn release_block(&self, block: Block)
Release a block.
pub fn block_clear_metadata(&self, block: Block)
pub fn acquire_block( &self, tls: VMThread, size: usize, align: usize ) -> BlockAcquireResult
pub fn get_abandoned_block_lists(&self) -> &Mutex<AbandonedBlockLists>
pub fn get_abandoned_block_lists_in_gc(&self) -> &Mutex<AbandonedBlockLists>
pub fn release_packet_done(&self)
fn generate_sweep_tasks(&self) -> Vec<Box<dyn GCWork<VM>>>
fn recycle_blocks(&self)
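acquire_block returns a BlockAcquireResult. As a rough illustration of a typical free-list acquisition order (reuse an abandoned block when one is available, otherwise take a fresh one, otherwise report exhaustion), here is a hypothetical, self-contained sketch; the `AcquireResult` enum and the reuse-first ordering are assumptions for illustration, not MMTk's actual definitions.

```rust
/// Hypothetical result type; MMTk's real `BlockAcquireResult` may differ.
#[derive(Debug)]
enum AcquireResult {
    Reused(usize),   // a previously abandoned block
    Fresh(usize),    // a newly acquired block
    Exhausted,       // no memory available
}

/// Simplified acquisition policy: prefer reuse over fresh allocation.
fn acquire_block(abandoned: &mut Vec<usize>, fresh_cursor: &mut Option<usize>) -> AcquireResult {
    if let Some(b) = abandoned.pop() {
        return AcquireResult::Reused(b);
    }
    match fresh_cursor.take() {
        Some(addr) => AcquireResult::Fresh(addr),
        None => AcquireResult::Exhausted,
    }
}

fn main() {
    let mut abandoned = vec![0x1000];
    let mut fresh = Some(0x2000);
    println!("{:?}", acquire_block(&mut abandoned, &mut fresh)); // Reused(4096)
    println!("{:?}", acquire_block(&mut abandoned, &mut fresh)); // Fresh(8192)
    println!("{:?}", acquire_block(&mut abandoned, &mut fresh)); // Exhausted
}
```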
Trait Implementations
impl<VM: VMBinding> PolicyTraceObject<VM> for MarkSweepSpace<VM>
fn trace_object<Q: ObjectQueue, const KIND: u8>( &self, queue: &mut Q, object: ObjectReference, _copy: Option<CopySemantics>, _worker: &mut GCWorker<VM> ) -> ObjectReference
If the policy copies objects, copy is expected to be a Some value.
fn may_move_objects<const KIND: u8>() -> bool
fn post_scan_object(&self, _object: ObjectReference)
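Since mark-sweep never moves objects, may_move_objects returns false and the _copy argument goes unused. A simplified, self-contained model of such a non-moving trace follows; `MarkState`, `ObjRef`, and the queue are illustrative stand-ins, not MMTk types.

```rust
use std::collections::HashSet;

/// Illustrative object reference (the real `ObjectReference` wraps an address).
type ObjRef = usize;

/// Simplified non-moving tracer modelled after a mark-sweep policy:
/// marking never relocates objects, so the returned reference is the input.
struct MarkState {
    marked: HashSet<ObjRef>,
}

impl MarkState {
    fn may_move_objects() -> bool {
        false // mark-sweep never moves objects
    }

    /// Mark the object; if it was not marked before, enqueue it for scanning.
    fn trace_object(&mut self, queue: &mut Vec<ObjRef>, object: ObjRef) -> ObjRef {
        if self.marked.insert(object) {
            queue.push(object); // first visit: scan its fields later
        }
        object // non-moving: hand the same reference back
    }
}

fn main() {
    let mut state = MarkState { marked: HashSet::new() };
    let mut queue = Vec::new();
    assert!(!MarkState::may_move_objects());
    assert_eq!(state.trace_object(&mut queue, 0x1000), 0x1000);
    state.trace_object(&mut queue, 0x1000); // second visit does not re-enqueue
    assert_eq!(queue.len(), 1);
}
```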
impl<VM: VMBinding> SFT for MarkSweepSpace<VM>
fn is_live(&self, object: ObjectReference) -> bool
fn pin_object(&self, _object: ObjectReference) -> bool
fn unpin_object(&self, _object: ObjectReference) -> bool
fn is_object_pinned(&self, _object: ObjectReference) -> bool
fn is_movable(&self) -> bool
fn is_sane(&self) -> bool
fn initialize_object_metadata(&self, _object: ObjectReference, _alloc: bool)
fn is_mmtk_object(&self, addr: Address) -> Option<ObjectReference>
Is addr a valid object reference to an object allocated in this space? This default implementation works for all spaces that use MMTk’s mapper to allocate memory. Some spaces, like MallocSpace, use third-party libraries to allocate memory. Such spaces need to override this method.
fn find_object_from_internal_pointer( &self, ptr: Address, max_search_bytes: usize ) -> Option<ObjectReference>
fn sft_trace_object( &self, queue: &mut VectorObjectQueue, object: ObjectReference, _worker: GCWorkerMutRef<'_> ) -> ObjectReference
SFTProcessEdges provides an easy way for most plans to trace objects without the need to implement any plan-specific code. However, tracing objects for some policies is more complicated, and those policies do not provide an implementation of this method. For example, the mark-compact space requires tracing twice in each GC, and Immix has a defrag trace and a fast trace.
fn get_forwarded_object( &self, _object: ObjectReference ) -> Option<ObjectReference>
fn is_reachable(&self, object: ObjectReference) -> bool
Some objects may have is_live = true but are actually unreachable.
fn is_in_space(&self, _object: ObjectReference) -> bool
impl<VM: VMBinding> Space<VM> for MarkSweepSpace<VM>
fn as_space(&self) -> &dyn Space<VM>
fn as_sft(&self) -> &(dyn SFT + Sync + 'static)
fn get_page_resource(&self) -> &dyn PageResource<VM>
fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource<VM>>
Get a mutable reference to the underlying page resource, or None if the space does not have a page resource.
fn initialize_sft(&self, sft_map: &mut dyn SFTMap)
fn common(&self) -> &CommonSpace<VM>
fn release_multiple_pages(&mut self, _start: Address)
fn enumerate_objects(&self, enumerator: &mut dyn ObjectEnumerator)
fn will_oom_on_acquire(&self, tls: VMThread, size: usize) -> bool
If the requested size is unrealistically large (such as usize::MAX), it breaks the assumptions of our implementation of page resource, vm map, etc. This check prevents that, and allows us to handle the OOM case.
Each allocator that may request an arbitrary size should call this method before acquiring memory from the space. For example, the bump-pointer allocator and the large object allocator need to call this method. On the other hand, allocators that only allocate memory in fixed-size blocks do not need to.
An allocator should call this method before doing any computation on the size, to avoid arithmetic overflow. If we have to do the computation in the allocation fastpath and overflow happens there, there is nothing we can do about it.
Returns a boolean indicating whether we will be out of memory, determined by the check.
fn acquire(&self, tls: VMThread, pages: usize) -> Address
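A minimal, self-contained sketch of the check-before-compute pattern described above; the heap limit constant and the function bodies are illustrative assumptions, not MMTk's implementation.

```rust
/// Illustrative heap limit; the real check consults the plan's heap size.
const HEAP_LIMIT_BYTES: usize = 4 << 30; // 4 GiB

/// Stand-in for `will_oom_on_acquire`: an obviously impossible request is
/// reported as OOM *before* any size arithmetic is attempted.
fn will_oom_on_acquire(size: usize) -> bool {
    size > HEAP_LIMIT_BYTES
}

/// Stand-in for an allocator slow path that requests an arbitrary size.
fn alloc(size: usize, align: usize) -> Option<usize> {
    // Check first: if `size` were close to usize::MAX, the alignment
    // arithmetic below could overflow before we ever reached the OOM path.
    if will_oom_on_acquire(size) {
        return None; // let the caller run its OOM handling
    }
    let aligned = size.checked_add(align - 1)? & !(align - 1);
    Some(aligned) // a real allocator would now acquire pages for this size
}

fn main() {
    assert!(alloc(usize::MAX, 8).is_none()); // caught by the early check
    assert_eq!(alloc(30, 8), Some(32));
}
```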
fn address_in_space(&self, start: Address) -> bool
fn in_space(&self, object: ObjectReference) -> bool
fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool)
fn ensure_mapped(&self)
fn reserved_pages(&self) -> usize
fn available_physical_pages(&self) -> usize
fn get_name(&self) -> &'static str
fn get_descriptor(&self) -> SpaceDescriptor
fn get_gc_trigger(&self) -> &GCTrigger<VM>
fn set_copy_for_sft_trace(&mut self, _semantics: Option<CopySemantics>)
fn verify_side_metadata_sanity( &self, side_metadata_sanity_checker: &mut SideMetadataSanity )
The sanity data recorded here is used when the extreme_assertions feature is active. Internally this calls verify_metadata_context() from util::metadata::sanity.
impl<VM: VMBinding> Sync for MarkSweepSpace<VM>
Auto Trait Implementations
impl<VM> !RefUnwindSafe for MarkSweepSpace<VM>
impl<VM> Send for MarkSweepSpace<VM>
impl<VM> Unpin for MarkSweepSpace<VM> where VM: Unpin
impl<VM> !UnwindSafe for MarkSweepSpace<VM>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Converts Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Converts Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Converts &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Converts &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s.
impl<T> DowncastSync for T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.
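A small usage sketch, assuming the either crate (which defines IntoEither) is available as a dependency:

```rust
use either::{Either, IntoEither};

fn main() {
    // Route a value to Left or Right based on a flag.
    let left: Either<i32, i32> = 1.into_either(true);
    assert!(matches!(left, Either::Left(1)));

    // Route based on a predicate over the value itself.
    let right: Either<i32, i32> = 10.into_either_with(|n| *n < 5);
    assert!(matches!(right, Either::Right(10)));
}
```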