Virtual memory is a memory-management scheme that allows a process larger than main memory to execute by keeping only part of it loaded in the system at any time.
Advantages of Virtual Memory:
The advantages of virtual memory are as follows:
- Allows shared address spaces between several processes.
- Logical address space can be much larger than the physical address space.
- Less I/O is needed to load or swap a process into memory, so each program can start running faster.
Segmentation:
Segmentation is a memory-management scheme that supports the programmer's view of memory. A logical address space is divided into variable-sized blocks called segments; each segment has a name (number) and a length.
- A logical address consists of two parts: <segment number, offset>.
- All the locations within a segment are placed in contiguous locations in primary storage.
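A minimal sketch of this address translation, assuming a hypothetical segment table that maps each segment number to a (base, limit) pair:

```python
# Hypothetical segment table: segment number -> (base, limit)
SEGMENT_TABLE = {0: (1000, 400), 1: (2400, 200), 2: (4000, 1100)}

def translate(segment, offset):
    """Map a logical address <segment, offset> to a physical address."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        # Addressing beyond the segment's length traps to the OS
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(1, 53))  # -> 2453 (base 2400 + offset 53)
```

An offset of 400 in segment 0 (limit 400) would raise the error, mirroring the hardware limit check.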
Advantages of Segmentation:
- It can save memory when segments are small, since a small segment does not have to be padded out to a whole page.
- No internal fragmentation
- Segment tables are small: one entry per actual segment, as opposed to one entry per page in paged virtual memory.
- Less overhead.
- Average segment size >> average page size
Disadvantages of Segmentation:
- External fragmentation.
- Memory management algorithms are costly.
- Allocation is costlier: segmentation must find a contiguous free memory area large enough for a segment, whereas paging simply keeps a list of free frames and any free frame will do.
- Segments of unequal size are not as convenient for swapping.
Implementation of virtual memory can be done by using any of the following:
- Demand paging
- Demand segmentation
Demand paging is a combination of paging and swapping. In demand-paged virtual memory, pages are loaded only when they are demanded during program execution. Thus, as long as there are no page faults, the effective access time equals the memory access time. However, if a page fault occurs, the relevant page must first be read from the disk; after that, the desired word can be accessed from memory.
Effective Access Time
Effective access time = (1 − p) × ma + p × page fault time
ma = Main Memory access time,
p = Probability of page fault (0 ≤ p ≤ 1)
If p = 1, it means every reference is a fault.
If p = 0, it means zero page faults.
Generally, p is close to zero. That is, there will be only a few page faults.
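The formula above can be checked with a short sketch; the figures here (200 ns memory access, 8 ms fault service time, p = 0.001) are illustrative assumptions, not values from the text:

```python
def effective_access_time(p, ma, fault_time):
    """EAT = (1 - p) * ma + p * page-fault service time (all times in ns)."""
    return (1 - p) * ma + p * fault_time

# Hypothetical figures: ma = 200 ns, fault service = 8 ms, p = 0.001
print(round(effective_access_time(0.001, 200, 8_000_000), 1))  # -> 8199.8
```

Note that even a fault probability of one in a thousand slows the average access by a factor of about 40, which is why p must be kept very close to zero.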
In a multiprogramming environment, a page fault may occur while there are no free frames on the free-frame list. This situation is called over-allocation of memory, and it results from increasing the degree of multiprogramming. Page replacement is the mechanism that resolves this problem.
Page Replacement Algorithms: In operating systems that use paging for virtual memory management, page replacement algorithms decide which memory page to evict when another page must be brought into memory and no free frame is available. Some of the page replacement policies are as follows:
First In First Out (FIFO):
The FIFO page replacement algorithm associates with each page the time at which that page was brought into memory. Thus, when a page must be replaced, the oldest page is selected.
- It is not strictly necessary to record these times: we can maintain a FIFO queue of all pages in memory. We replace the page at the head of the queue, and when a page is brought into memory we insert it at the tail of the queue.
Example: Suppose the following reference string: a, b, c, a, d, c, a and a frame size of 3 (three physical frames in main memory). Determine the number of page faults using FIFO.
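The FIFO count for a reference string can be simulated with a short sketch (the function name is ours, not from the text):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest resident page
            frames.add(page)
            queue.append(page)  # newly loaded page goes to the tail
    return faults

print(fifo_faults("abcadca", 3))  # -> 5
```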
The number of page faults using FIFO is 5.
Optimal Page Replacement:
It has the lowest page-fault rate of all algorithms and never suffers from Belady's Anomaly, because it replaces the page that will not be used for the longest period of time. However, it is hard to implement, since it requires future knowledge of how pages will be used.
Belady's Anomaly is a phenomenon, seen with some page replacement algorithms (such as FIFO), in which the page fault rate increases as the number of allocated frames increases.
Example: Suppose the same reference string: a, b, c, a, d, c, a
and a frame size of 3. Find the number of page faults using the Optimal replacement algorithm.
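Under the same assumptions, the Optimal policy (evict the page whose next use lies farthest in the future) can be sketched as:

```python
def optimal_faults(refs, nframes):
    """Count page faults under Optimal (farthest-future-use) replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                """Index of p's next reference, or infinity if never used again."""
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            # Evict the resident page whose next use is farthest away
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(optimal_faults("abcadca", 3))  # -> 4
```

On the fault for d, page b is evicted because it is never referenced again, which is exactly the future knowledge no real system has.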
The number of page faults using the Optimal algorithm is 4, which is less than FIFO.
LRU (Least Recently Used) Page Replacement:
In this algorithm, the recent past is used as an approximation of the near future: the page that has not been used for the longest time is replaced.
- Each page-table entry has its own counter.
- Whenever a page is referenced, the current clock value is copied into its counter.
- When a page must be replaced, the page with the smallest counter value is selected.
- This is costly, since the page table must be searched.
- Alternatively, a stack of page numbers can be maintained.
Example: Suppose the following reference string: a, b, c, a, d, c, a, b
and a frame size of 3. Find the number of page faults using the Least Recently Used algorithm.
The total number of page faults is 5.
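The LRU count above can be checked with a minimal sketch that keeps resident pages ordered from least to most recently used:

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    frames, faults = [], 0   # list ordered from LRU (front) to MRU (back)
    for page in refs:
        if page in frames:
            frames.remove(page)      # refresh: will be re-appended as MRU
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults("abcadcab", 3))  # -> 5
```

This list-based version is O(n) per reference; the counter and stack schemes described above are how hardware-assisted implementations avoid that cost.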
Working Set: The working-set (WS) model is based on the assumption of locality.
The working set is the collection of pages that a process is actively referencing. For a program to run efficiently, its working set of pages must be in main memory; otherwise, excessive paging activity may occur. The WS model keeps track of the active pages of the current locality.
Working Set Storage Management Policy: It seeks to keep the working sets of active processes in main memory. At time t, W(t, w) = the set of pages referenced during the interval from t − w to t; w is known as the working-set window size.
The size of WS (working set) will affect the performance:
- If w is too small, it will not encompass an entire locality.
- If w is too large, it may encompass several localities.
- If w is huge, it will encompass the entire program.
- The working set varies as the process executes.
- A process demand-pages the pages of its working set one at a time.
- The process gradually receives enough storage to hold its working set.
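The definition of W(t, w) can be sketched directly; the reference string here is an illustrative assumption:

```python
def working_set(refs, t, w):
    """W(t, w): distinct pages referenced in the last w references up to time t."""
    return set(refs[max(0, t - w):t])

refs = "abcadcab"  # hypothetical reference string; time t = number of references seen
print(sorted(working_set(refs, 5, 3)))  # -> ['a', 'c', 'd']
```

With w = 3 at t = 5, only the references c, a, d fall inside the window, so the working set holds three pages even though four distinct pages have been touched so far.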
Frame Allocation: The maximum number of frames allocated to a process depends on the size of physical memory (i.e., the total number of available frames). The minimum number of frames allocated to a process depends on the instruction-set architecture.
- A process is thrashing when it spends more time paging (i.e., servicing page faults) than executing.
- Thrashing causes low CPU utilization, because the thrashing processes spend their time queued up for the paging device.
- Seeing the low CPU utilization, the CPU scheduler tries to increase the degree of multiprogramming by introducing more processes into the system. These new processes cause even more page faults and further lengthen the queue for the paging device.