Struct mmtk::policy::marksweepspace::malloc_ms::global::MallocSpace
pub struct MallocSpace<VM: VMBinding> {
phantom: PhantomData<VM>,
active_bytes: AtomicUsize,
active_pages: AtomicUsize,
pub chunk_addr_min: Atomic<Address>,
pub chunk_addr_max: Atomic<Address>,
metadata: SideMetadataContext,
scheduler: Arc<GCWorkScheduler<VM>>,
gc_trigger: Arc<GCTrigger<VM>>,
active_mem: Mutex<HashMap<Address, usize>>,
pub total_work_packets: AtomicU32,
pub completed_work_packets: AtomicU32,
pub work_live_bytes: AtomicUsize,
}
This space uses malloc to get new memory, and performs mark-sweep to reclaim it.
Fields
phantom: PhantomData<VM>
active_bytes: AtomicUsize
active_pages: AtomicUsize
chunk_addr_min: Atomic<Address>
chunk_addr_max: Atomic<Address>
metadata: SideMetadataContext
scheduler: Arc<GCWorkScheduler<VM>>
Work packet scheduler
gc_trigger: Arc<GCTrigger<VM>>
active_mem: Mutex<HashMap<Address, usize>>
total_work_packets: AtomicU32
completed_work_packets: AtomicU32
work_live_bytes: AtomicUsize
Implementations
impl<VM: VMBinding> MallocSpace<VM>
pub fn extend_global_side_metadata_specs(specs: &mut Vec<SideMetadataSpec>)
pub fn new(args: PlanCreateSpaceArgs<'_, VM>) -> Self
fn set_page_mark(&self, start: Address, size: usize)
Set multiple pages, starting from the given address, for the given size, and increase the active page count if we set any page mark in the region. This method is thread-safe and can be used during the mutator phase, when mutators may access the same page. Performance-wise, it may impose overhead, as we do a compare-exchange for every page in the range.
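The compare-exchange scheme described above can be sketched with plain std atomics. This is a minimal illustration, not MMTk's actual implementation: the one-byte-per-page layout and all names here are assumptions.

```rust
use std::sync::atomic::{AtomicU8, AtomicUsize, Ordering};

// Hypothetical sketch: one mark byte per page, set with compare-exchange so
// that exactly one thread wins the 0 -> 1 transition for a page and bumps
// the active-page counter exactly once.
struct PageMarks {
    marks: Vec<AtomicU8>,
    active_pages: AtomicUsize,
}

impl PageMarks {
    fn new(num_pages: usize) -> Self {
        PageMarks {
            marks: (0..num_pages).map(|_| AtomicU8::new(0)).collect(),
            active_pages: AtomicUsize::new(0),
        }
    }

    // Mark every page in [first_page, first_page + count). Each page costs
    // one compare-exchange, which is the overhead the doc comment mentions.
    fn set_page_marks(&self, first_page: usize, count: usize) {
        for page in first_page..first_page + count {
            if self.marks[page]
                .compare_exchange(0, 1, Ordering::SeqCst, Ordering::SeqCst)
                .is_ok()
            {
                // We were the thread that marked this page: count it once.
                self.active_pages.fetch_add(1, Ordering::SeqCst);
            }
        }
    }
}

fn main() {
    let pm = PageMarks::new(8);
    pm.set_page_marks(1, 3); // marks pages 1, 2, 3
    pm.set_page_marks(2, 3); // pages 2, 3 already marked; only 4 is new
    assert_eq!(pm.active_pages.load(Ordering::SeqCst), 4);
}
```

Because losers of the compare-exchange skip the counter update, concurrent mutators marking overlapping ranges cannot double-count a page.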
unsafe fn unset_page_mark(&self, start: Address, size: usize)
Unset multiple pages, starting from the given address, for the given size, and decrease the active page count if we unset any page mark in the region.
Safety
We need to ensure that only one GC thread is accessing the range.
pub fn alloc(&self, tls: VMThread, size: usize, align: usize, offset: usize) -> Address
pub fn free(&self, addr: Address)
fn free_internal(&self, addr: Address, bytes: usize, offset_malloc_bit: bool)
pub fn trace_object<Q: ObjectQueue>(&self, queue: &mut Q, object: ObjectReference) -> ObjectReference
fn map_metadata_and_update_bound(&self, addr: Address, size: usize)
pub fn prepare(&mut self)
pub fn release(&mut self)
pub fn end_of_gc(&mut self)
pub fn sweep_chunk(&self, chunk_start: Address)
fn get_malloc_addr_size(object: ObjectReference) -> (Address, bool, usize)
Given an object in MallocSpace, return its malloc address, whether it is an offset malloc, and the malloc size.
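An "offset malloc" arises when the requested alignment exceeds what malloc guarantees, so the object starts at an aligned offset inside the raw allocation. The alignment arithmetic behind that can be sketched as follows; the names are illustrative, not MMTk's actual code.

```rust
// Round `addr` up to the next multiple of `align` (align must be a power of two).
fn align_up(addr: usize, align: usize) -> usize {
    (addr + align - 1) & !(align - 1)
}

// Hypothetical sketch: given the raw address malloc returned and the requested
// alignment, compute where the object starts and whether an offset was needed.
// get_malloc_addr_size performs the inverse mapping: object -> raw address.
fn object_start(raw: usize, align: usize) -> (usize, bool) {
    let start = align_up(raw, align);
    (start, start != raw)
}

fn main() {
    // malloc happened to return a 16-byte aligned block, but we need 64:
    // the object is placed at the next 64-byte boundary (an offset malloc).
    assert_eq!(object_start(0x1010, 64), (0x1040, true));
    // already 64-byte aligned: no offset needed.
    assert_eq!(object_start(0x2000, 64), (0x2000, false));
}
```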
fn clean_up_empty_chunk(&self, chunk_start: Address)
Clean up an empty chunk.
fn sweep_object(&self, object: ObjectReference, empty_page_start: &mut Address) -> bool
Sweep an object if it is dead, and unset page marks for empty pages before this object. Return true if the object is swept.
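The dead-check-and-free pattern this describes can be illustrated with a toy registry of allocations. This is a hypothetical sketch: it omits the page-mark bookkeeping, and a HashMap stands in for real malloc/free.

```rust
use std::collections::HashMap;

// Toy sketch (not MMTk's code): each object has a mark bit set during
// tracing; at sweep time, an unmarked object is dead and gets freed.
struct ToyHeap {
    // object address -> (malloc size, mark bit)
    objects: HashMap<usize, (usize, bool)>,
}

impl ToyHeap {
    // Returns true if the object was dead and has been freed, mirroring
    // sweep_object's "return true if the object is swept".
    fn sweep_object(&mut self, addr: usize) -> bool {
        match self.objects.get(&addr) {
            // Unmarked after tracing: the object is dead, free it.
            Some(&(_, false)) => {
                self.objects.remove(&addr); // stand-in for free()
                true
            }
            // Marked (live) or unknown address: nothing to sweep.
            _ => false,
        }
    }
}

fn main() {
    let mut heap = ToyHeap { objects: HashMap::new() };
    heap.objects.insert(0x1000, (32, true));  // live
    heap.objects.insert(0x2000, (64, false)); // dead
    assert!(!heap.sweep_object(0x1000)); // live object is kept
    assert!(heap.sweep_object(0x2000));  // dead object is swept
    assert_eq!(heap.objects.len(), 1);
}
```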
fn debug_sweep_chunk_done(&self, live_bytes_in_the_chunk: usize)
Called when each chunk is done. Only called in debug builds.
fn sweep_chunk_mark_on_side(&self, chunk_start: Address, mark_bit_spec: SideMetadataSpec)
This function is called when the mark bits sit in side metadata. It has been optimized with bulk loading and bulk zeroing of metadata.
This function uses non-atomic accesses to side metadata (although these non-atomic accesses should not have race conditions associated with them), and calls libc functions (malloc_usable_size(), free()).
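The bulk loading and bulk zeroing idea can be sketched over a packed mark bitmap. This illustrates the technique only; it is not MMTk's side-metadata code, and the function name is hypothetical.

```rust
// Mark bits packed into u64 words: a single load inspects 64 mark bits at
// once, all-zero words are skipped outright, and each visited word is
// bulk-zeroed for the next GC instead of clearing bits one at a time.
fn for_each_marked(bitmap: &mut [u64], mut visit: impl FnMut(usize)) {
    for (word_idx, word) in bitmap.iter_mut().enumerate() {
        if *word == 0 {
            continue; // 64 unmarked entries dismissed with a single load
        }
        let mut bits = *word;
        while bits != 0 {
            let bit = bits.trailing_zeros() as usize;
            visit(word_idx * 64 + bit); // index of a marked entry
            bits &= bits - 1; // clear the lowest set bit
        }
        *word = 0; // bulk zeroing of the mark metadata
    }
}

fn main() {
    let mut bitmap = [0u64, 0b1010, 0];
    let mut marked = Vec::new();
    for_each_marked(&mut bitmap, |i| marked.push(i));
    assert_eq!(marked, vec![65, 67]); // bits 1 and 3 of the second word
    assert!(bitmap.iter().all(|w| *w == 0)); // all marks cleared
}
```

On mostly-empty chunks this touches one word per 64 entries, which is the payoff of keeping mark bits on the side rather than in object headers.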
fn sweep_chunk_mark_in_header(&self, chunk_start: Address)
This sweep function is called when the mark bit sits in the object header.
This function uses non-atomic accesses to side metadata (although these non-atomic accesses should not have race conditions associated with them), and calls libc functions (malloc_usable_size(), free()).
fn sweep_each_object_in_chunk(&self, chunk_start: Address)
Trait Implementations
impl<VM: VMBinding> PolicyTraceObject<VM> for MallocSpace<VM>
fn trace_object<Q: ObjectQueue, const KIND: u8>(&self, queue: &mut Q, object: ObjectReference, _copy: Option<CopySemantics>, _worker: &mut GCWorker<VM>) -> ObjectReference
If the policy copies objects, it expects copy to be a Some value.
fn may_move_objects<const KIND: u8>() -> bool
fn post_scan_object(&self, _object: ObjectReference)
impl<VM: VMBinding> SFT for MallocSpace<VM>
fn is_mmtk_object(&self, addr: Address) -> Option<ObjectReference>
For malloc space, we just use the side metadata.
fn is_live(&self, object: ObjectReference) -> bool
fn pin_object(&self, _object: ObjectReference) -> bool
fn unpin_object(&self, _object: ObjectReference) -> bool
fn is_object_pinned(&self, _object: ObjectReference) -> bool
fn is_movable(&self) -> bool
fn is_sane(&self) -> bool
fn is_in_space(&self, object: ObjectReference) -> bool
fn find_object_from_internal_pointer(&self, ptr: Address, max_search_bytes: usize) -> Option<ObjectReference>
fn initialize_object_metadata(&self, object: ObjectReference, _alloc: bool)
fn sft_trace_object(&self, queue: &mut VectorObjectQueue, object: ObjectReference, _worker: GCWorkerMutRef<'_>) -> ObjectReference
SFTProcessEdges provides an easy way for most plans to trace objects without the need to implement any plan-specific code. However, tracing objects for some policies is more complicated, and those policies do not provide an implementation of this method. For example, mark compact space requires tracing twice in each GC; Immix has a defrag trace and a fast trace.
fn get_forwarded_object(&self, _object: ObjectReference) -> Option<ObjectReference>
fn is_reachable(&self, object: ObjectReference) -> bool
Some objects may have is_live = true but are actually unreachable.
impl<VM: VMBinding> Space<VM> for MallocSpace<VM>
fn as_space(&self) -> &dyn Space<VM>
fn as_sft(&self) -> &(dyn SFT + Sync + 'static)
fn get_page_resource(&self) -> &dyn PageResource<VM>
fn maybe_get_page_resource_mut(&mut self) -> Option<&mut dyn PageResource<VM>>
Returns None if the space does not have a page resource.
fn common(&self) -> &CommonSpace<VM>
fn get_gc_trigger(&self) -> &GCTrigger<VM>
fn initialize_sft(&self, _sft_map: &mut dyn SFTMap)
fn release_multiple_pages(&mut self, _start: Address)
fn in_space(&self, object: ObjectReference) -> bool
fn address_in_space(&self, _start: Address) -> bool
fn get_name(&self) -> &'static str
fn reserved_pages(&self) -> usize
fn verify_side_metadata_sanity(&self, side_metadata_sanity_checker: &mut SideMetadataSanity)
Performs sanity checks when the extreme_assertions feature is active. Internally this calls verify_metadata_context() from util::metadata::sanity.
fn enumerate_objects(&self, _enumerator: &mut dyn ObjectEnumerator)
fn will_oom_on_acquire(&self, tls: VMThread, size: usize) -> bool
If the requested size is extremely large (such as usize::MAX), it breaks the assumptions of our implementation of page resource, vm map, etc. This check prevents that, and allows us to handle the OOM case.
Each allocator that may request an arbitrary size should call this method before acquiring memory from the space. For example, the bump pointer allocator and the large object allocator need to call this method. On the other hand, allocators that only allocate memory in fixed-size blocks do not need to call this method.
An allocator should call this method before doing any computation on the size, to avoid arithmetic overflow. If we have to do computation in the allocation fastpath and overflow happens there, there is nothing we can do about it.
Return a boolean to indicate if we will be out of memory, determined by the check.
fn acquire(&self, tls: VMThread, pages: usize) -> Address
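The overflow-avoidance discipline described above can be sketched as a cap check performed before any size arithmetic. The cap value and names here are hypothetical, not MMTk's actual constants.

```rust
// Illustrative cap: a size above this is treated as an obvious OOM request,
// rejected before any arithmetic could overflow in the fastpath.
const MAX_OBJECT_SIZE: usize = usize::MAX >> 1;

// Hedged sketch of the pre-allocation check, not MMTk's implementation.
fn will_oom_on_acquire(size: usize) -> bool {
    size > MAX_OBJECT_SIZE
}

// Size arithmetic (e.g. adding alignment padding) happens only after the
// check, and still uses checked arithmetic defensively.
fn padded_size(size: usize, align: usize) -> Option<usize> {
    size.checked_add(align - 1).map(|s| s & !(align - 1))
}

fn main() {
    assert!(!will_oom_on_acquire(4096));
    assert!(will_oom_on_acquire(usize::MAX)); // caught before any arithmetic
    assert_eq!(padded_size(100, 16), Some(112));
    assert_eq!(padded_size(usize::MAX, 16), None); // overflow surfaces as None
}
```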
fn grow_space(&self, start: Address, bytes: usize, new_chunk: bool)
fn ensure_mapped(&self)
fn available_physical_pages(&self) -> usize
fn set_copy_for_sft_trace(&mut self, _semantics: Option<CopySemantics>)
Auto Trait Implementations
impl<VM> !RefUnwindSafe for MallocSpace<VM>
impl<VM> Send for MallocSpace<VM>
impl<VM> Sync for MallocSpace<VM>
impl<VM> Unpin for MallocSpace<VM> where VM: Unpin
impl<VM> !UnwindSafe for MallocSpace<VM>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Downcast for T where T: Any
fn into_any(self: Box<T>) -> Box<dyn Any>
Converts Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.
fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Converts Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.
fn as_any(&self) -> &(dyn Any + 'static)
Converts &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any's vtable from &Trait's.
fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Converts &mut Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &mut Any's vtable from &mut Trait's.
impl<T> DowncastSync for T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.