Struct mmtk::util::heap::layout::fragmented_mapper::FragmentedMapper
pub struct FragmentedMapper {
lock: Mutex<()>,
inner: UnsafeCell<InnerFragmentedMapper>,
}
Fields§
§lock: Mutex<()>
§inner: UnsafeCell<InnerFragmentedMapper>
Implementations§
impl FragmentedMapper
pub fn new() -> Self
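§Example
A minimal construction sketch; FragmentedMapper is internal to mmtk, so this only compiles from within the crate:

let mapper = FragmentedMapper::new();
// Equivalent, via the Default impl listed below:
let mapper = FragmentedMapper::default();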
fn new_slab() -> Box<[Atomic<MapState>; 2048]>
fn hash(addr: Address) -> usize
fn slab_table(&self, addr: Address) -> Option<&[Atomic<MapState>; 2048]>
fn get_or_allocate_slab_table(&self, addr: Address) -> &[Atomic<MapState>; 2048]
fn inner(&self) -> &InnerFragmentedMapper
fn inner_mut(&self) -> &mut InnerFragmentedMapper
fn get_or_optionally_allocate_slab_table(&self, addr: Address, allocate: bool) -> Option<&[Atomic<MapState>; 2048]>
fn slab_table_for(&self, _addr: Address, index: usize) -> Option<&[Atomic<MapState>; 2048]>
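§Example
A sketch of how these private helpers compose into a chunk-state lookup. The map_state_of wrapper is hypothetical, and it assumes MapState is Copy and that Atomic here is atomic::Atomic:

use atomic::Ordering;

// Hypothetical: read the MapState of the chunk containing `addr`.
// `slab_table` hashes `addr` to the per-slab array of 2048 chunk states;
// `chunk_index` then picks the entry for `addr`'s chunk within that slab.
fn map_state_of(mapper: &FragmentedMapper, addr: Address) -> Option<MapState> {
    let slab = FragmentedMapper::slab_align_down(addr);
    mapper
        .slab_table(addr)
        .map(|table| table[FragmentedMapper::chunk_index(slab, addr)].load(Ordering::Relaxed))
}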
unsafe fn commit_free_slab(&self, index: usize)
Take a free slab of chunks from the pool of free slabs and insert it at the given index in the slab table.
§Safety
Caller must ensure that only one thread is calling this function at a time.
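§Example
A sketch of the calling convention the safety contract implies, written as it might appear inside another FragmentedMapper method (the surrounding code is hypothetical):

// Hold the mapper's lock so no other thread can enter commit_free_slab.
let _guard = self.lock.lock().unwrap();
// SAFETY: the guard above gives this thread exclusive access.
unsafe { self.commit_free_slab(index) };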
fn chunk_index_to_address(base: Address, chunk: usize) -> Address
fn slab_align_down(addr: Address) -> Address
Returns the base address of the slab enclosing `addr`.
fn slab_limit(addr: Address) -> Address
Returns the base address of the slab that follows the one enclosing `addr`.
fn chunk_index(slab: Address, addr: Address) -> usize
Given `slab`, the base address of a slab, and `addr`, an address within some chunk (possibly in the next slab), returns the index of `addr`'s chunk relative to the slab (which may lie beyond the end of the slab).
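§Example
A self-contained sketch of the alignment arithmetic these three helpers perform, using usize in place of Address. The constants are assumptions standing in for the crate's real layout constants:

const LOG_BYTES_IN_CHUNK: usize = 22;     // assume 4 MiB chunks
const LOG_BYTES_IN_SLAB: usize = 22 + 11; // assume 2048 chunks per slab
const SLAB_MASK: usize = (1 << LOG_BYTES_IN_SLAB) - 1;

fn slab_align_down(addr: usize) -> usize {
    addr & !SLAB_MASK // base address of the enclosing slab
}

fn slab_limit(addr: usize) -> usize {
    slab_align_down(addr) + (1 << LOG_BYTES_IN_SLAB) // base of the next slab
}

fn chunk_index(slab: usize, addr: usize) -> usize {
    (addr - slab) >> LOG_BYTES_IN_CHUNK // chunk offset relative to the slab base
}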
Trait Implementations§
impl Debug for FragmentedMapper
impl Default for FragmentedMapper
impl Mmapper for FragmentedMapper
fn is_mapped_address(&self, addr: Address) -> bool
Returns `true` if the given address has been mmapped.
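§Example
A hypothetical query; Address::from_usize is unsafe because an arbitrary integer need not be a valid address:

use mmtk::util::Address;

let addr = unsafe { Address::from_usize(0x1000_0000) };
if mapper.is_mapped_address(addr) {
    // the memory containing `addr` has been mmapped
}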
fn eagerly_mmap_all_spaces(&self, _space_map: &[Address])
Given an array of addresses describing the regions of virtual memory to be used
by MMTk, demand-zero map all of them if they are not already mapped.
fn mark_as_mapped(&self, start: Address, bytes: usize)
Mark a number of pages as mapped, without making any
request to the operating system. Used to mark pages
that the VM has already mapped.
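§Example
A sketch, assuming the VM has already mapped `bytes` bytes starting at `start` (for example, a boot image):

// Record an existing VM mapping so MMTk will not try to mmap it again.
mapper.mark_as_mapped(start, bytes);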
fn quarantine_address_range(&self, start: Address, pages: usize, strategy: MmapStrategy, anno: &MmapAnnotation<'_>) -> Result<()>
Quarantine/reserve an address range. We mmap from the OS with no reserve and with PROT_NONE,
which should incur little overhead. This ensures that we can reserve a certain address range
and use it later if needed. Quarantined memory must be mapped before it can be used.
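§Example
A sketch of the reserve-then-map pattern this method enables; `start`, `pages`, `strategy`, and `anno` are assumed to be supplied by the caller:

// Reserve the range up front (PROT_NONE, nothing committed yet) ...
mapper.quarantine_address_range(start, pages, strategy, anno)?;
// ... then later make it usable by actually mapping it:
mapper.ensure_mapped(start, pages, strategy, anno)?;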
fn ensure_mapped(&self, start: Address, pages: usize, strategy: MmapStrategy, anno: &MmapAnnotation<'_>) -> Result<()>
Ensure that a range of pages is mmapped (or equivalent). If the
pages are not yet mapped, demand-zero map them. Note that mapping
occurs at chunk granularity, not page granularity.
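§Example
A sketch illustrating chunk granularity; the variables are assumed to be supplied by the caller:

// Even a one-page request maps the whole enclosing chunk, because
// mapping is performed chunk by chunk rather than page by page.
mapper.ensure_mapped(start, 1, strategy, anno)?;
assert!(mapper.is_mapped_address(start));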
impl Send for FragmentedMapper
impl Sync for FragmentedMapper
Auto Trait Implementations§
impl !RefUnwindSafe for FragmentedMapper
impl Unpin for FragmentedMapper
impl UnwindSafe for FragmentedMapper
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Downcast for T where T: Any

fn into_any(self: Box<T>) -> Box<dyn Any>
Converts Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>, which can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait.

fn into_any_rc(self: Rc<T>) -> Rc<dyn Any>
Converts Rc<Trait> (where Trait: Downcast) to Rc<Any>, which can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait.

fn as_any(&self) -> &(dyn Any + 'static)
Converts &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s.

fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Converts &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s.
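§Example
A generic downcast-rs sketch, unrelated to FragmentedMapper itself; the Shape and Circle types are hypothetical:

use downcast_rs::{impl_downcast, Downcast};
use std::any::Any;

trait Shape: Downcast {}
impl_downcast!(Shape);

struct Circle;
impl Shape for Circle {}

let shape: Box<dyn Shape> = Box::new(Circle);
let any: Box<dyn Any> = shape.into_any();
assert!(any.downcast::<Circle>().is_ok());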
impl<T> DowncastSync for T
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
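§Example
A generic sketch of the either crate's IntoEither on a plain integer:

use either::{Either, IntoEither};

let left: Either<i32, i32> = 1.into_either(true);
assert!(left.is_left());

let right: Either<i32, i32> = 1.into_either_with(|x| *x > 10);
assert!(right.is_right());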