Address Binding: Binding of instructions and data to memory addresses.
- Compile time: If the process's memory location is known at compile time, absolute code can be generated.
- Load time: The compiler generates relocatable code, which is bound to addresses at load time.
- Execution time: If a process can be moved from one memory segment to another during execution, binding must be delayed until run time.
Dynamic Loading
- A routine is not loaded until it is called.
- Better memory-space utilization; an unused routine is never loaded.
- Useful when large amounts of code are needed to handle infrequently occurring cases.
- No special support from the operating system is required; it is implemented through program design.
Dynamic Linking
- Linking is postponed until execution time.
- A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
- The stub replaces itself with the address of the routine and then executes the routine.
- Operating system support is needed to check whether the routine lies within the process's memory address space.
Overlays: This technique keeps in memory only those instructions and data that are required at a given time. Other instructions and data are loaded into the memory space occupied by the previous ones when they are needed.
Swapping: Consider an environment that supports multiprogramming using, say, the Round Robin (RR) CPU scheduling algorithm. When a process has finished executing for one time quantum, it is swapped out of memory to a backing store.
The memory manager then picks another process from the backing store and loads it into the memory vacated by the previous process. The scheduler then allocates the CPU to this newly loaded process.
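The swap-out/swap-in cycle described above can be illustrated with a toy simulation; the single memory slot and process names are simplifying assumptions, not how a real memory manager is structured.

```python
# Tiny round-robin swapping sketch: one memory slot, a backing store,
# and a loop that swaps the resident process out after each quantum.
from collections import deque

backing_store = deque(["P1", "P2", "P3"])  # processes waiting on disk
memory = None
trace = []

for _ in range(4):                    # run four time quanta
    memory = backing_store.popleft()  # memory manager swaps in the next process
    trace.append(memory)              # it runs for one quantum
    backing_store.append(memory)      # then it is swapped out to the backing store

print(trace)  # ['P1', 'P2', 'P3', 'P1']
```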
Memory Management Techniques
The main memory must accommodate both the operating system and the various user processes. The parts of the main memory must be allocated in the most efficient way possible.
There are two ways of memory allocation, as given below.
Single Partition Allocation: The memory is divided into two parts: one used by the OS and the other for user programs. The OS code and data are protected from modification by user programs using a base register.
Multiple Partition Allocation: The multiple partition allocation may be further classified as:
Fixed Partition Scheme: Memory is divided into a number of fixed size partitions. Then, each partition holds one process. This scheme supports multiprogramming as a number of processes may be brought into memory and the CPU can be switched from one process to another.
When a process arrives for execution, it is put into the input queue of the smallest partition that is large enough to hold it.
Variable Partition Scheme: A block of available memory is called a hole. At any time, a set of holes of various sizes is scattered throughout memory.
When a process arrives and needs memory, this set of holes is searched for a hole large enough to hold the process. If the hole is too large, it is split into two parts: the allocated part and the unused part, which is returned to the set of holes. Holes adjacent to each other are merged.
There are different ways of implementing allocation of partitions from a list of free holes, such as:
- first-fit: allocate the first hole that is big enough
- best-fit: allocate the smallest hole that is big enough; the entire list of holes must be searched, unless it is ordered by size
- next-fit: scan holes from the location of the last allocation and choose the next available block that is large enough (can be implemented using a circular linked list)
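The first-fit and best-fit strategies above can be sketched as follows; the function names and the `(start, size)` hole representation are assumptions made for illustration, and a real allocator would also merge adjacent holes on free.

```python
# Toy allocators over a list of (start, size) holes.

def first_fit(holes, size):
    # Return the index of the first hole that is big enough.
    for i, (start, sz) in enumerate(holes):
        if sz >= size:
            return i
    return None

def best_fit(holes, size):
    # Return the index of the smallest hole that is big enough
    # (requires scanning the entire list).
    best = None
    for i, (start, sz) in enumerate(holes):
        if sz >= size and (best is None or sz < holes[best][1]):
            best = i
    return best

def allocate(holes, size, fit=first_fit):
    i = fit(holes, size)
    if i is None:
        return None                           # no hole is large enough
    start, sz = holes[i]
    if sz == size:
        holes.pop(i)                          # hole consumed entirely
    else:
        holes[i] = (start + size, sz - size)  # split; unused part stays a hole
    return start

holes = [(0, 100), (300, 50), (500, 200)]
print(allocate(holes, 120))           # first fit: 500 (first hole >= 120)
print(allocate(holes, 40, best_fit))  # best fit: 300 (smallest adequate hole)
```

Next-fit would work like `first_fit` but resume scanning from the index of the previous allocation, wrapping around circularly.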
Disadvantages of Memory Management Techniques
The above schemes cause external and internal fragmentation of memory, as described below.
- External Fragmentation: When there is enough total memory in the system to satisfy the requirements of a process but the memory is not contiguous.
- Internal Fragmentation: The memory wasted inside allocated blocks of memory is called internal fragmentation. For example, consider a process requiring 150 K: if a hole of size 170 K is allocated to it, the remaining 20 K is wasted.
Compaction: This is a strategy to solve the problem of external fragmentation. All free memory is brought together into one contiguous block by moving processes to new locations.
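Compaction can be sketched as sliding every allocated block toward address 0; the `(start, size)` block representation is an assumption for illustration, and real compaction must also update every relocated process's addresses.

```python
# Compaction sketch: slide allocated blocks toward address 0 so that
# all free memory becomes a single contiguous hole at the top.

def compact(blocks):
    blocks = sorted(blocks)                   # process blocks by address
    new_blocks, next_free = [], 0
    for start, size in blocks:
        new_blocks.append((next_free, size))  # relocate block downward
        next_free += size
    return new_blocks, next_free              # next_free begins the single hole

blocks = [(100, 50), (300, 20), (600, 30)]
print(compact(blocks))  # ([(0, 50), (50, 20), (70, 30)], 100)
```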
Paging: This is a memory management technique that allows memory to be allocated to a process wherever it is available.
- Physical memory is divided into fixed size blocks called frames.
- Logical memory is broken into blocks of same size called pages.
- The backing store is also divided into same size blocks.
- When a process is to be executed its pages are loaded into available page frames.
- A frame is the same size as a page; the size is defined by the hardware and is typically a power of 2.
- Every logical address generated by the CPU is divided into two parts: The page number (P) and the page offset (d).
- The page number is used as an index into a page table.
- Each entry in the page table contains the base address of the page in physical memory (f).
- The frame base address (f) found in the Pth entry is then combined with the offset (d) to give the actual physical memory address.
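The translation described above can be sketched as follows; the 4 KB page size and the page-table contents are illustrative assumptions.

```python
# Split a logical address into page number p and offset d, then
# translate through a toy page table.

PAGE_SIZE = 4096                   # 2^12, so the offset is the low 12 bits
page_table = {0: 5, 1: 2, 2: 7}    # page -> frame, illustrative values

def translate(logical):
    p = logical // PAGE_SIZE       # page number indexes the page table
    d = logical % PAGE_SIZE        # offset is unchanged by translation
    f = page_table[p]              # frame number from the page table
    return f * PAGE_SIZE + d       # physical address = frame base + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```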
Implementation of Page Table:
- Page table is kept in main memory.
- Page-table base register (PTBR) points to the page table.
- Page-table length register (PTLR) indicates the size of the page table.
- In this scheme, every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction itself.
- The two-memory-access problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).
- Associative Memory:
- Each entry in the associative memory has two parts: page number and frame number.
- If the page number is found in the associative memory, its frame number is available immediately, without an extra memory access.
- Paging Hardware with TLB: on a TLB hit, the frame number comes from the TLB; on a miss, the page table in main memory is consulted.
Effective Access Time
Let associative lookup time = t time units, and assume a memory cycle time of 1 microsecond.
Hit ratio (h): the percentage of times that a page number is found in the associative registers; this ratio is related to the number of associative registers.
Effective Access Time (EAT):
EAT = (1 + t)h + (2 + t)(1 - h) = 2 + t - h
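The EAT formula above can be checked numerically; the lookup time and hit ratio below are example values, not figures from the text.

```python
# Effective access time with a TLB, following the formula above:
# EAT = (1 + t)h + (2 + t)(1 - h), with memory cycle time = 1 time unit.
# A TLB hit costs one memory access plus the lookup; a miss costs two.

def eat(t, h):
    return (1 + t) * h + (2 + t) * (1 - h)   # algebraically equals 2 + t - h

print(eat(0.2, 0.8))   # 0.2 us lookup, 80% hit ratio -> ~1.4 us
print(eat(0.0, 1.0))   # free lookup, perfect hit ratio -> 1.0 us
```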
Memory Protection: Memory protection is implemented by associating a protection bit with each frame.
Valid-invalid bit attached to each entry in the page table:
- “valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page.
- “invalid” indicates that the page is not in the process’ logical address space.
Page Table Structure:
- Hierarchical Paging: Break up the logical address space into multiple page tables.
Example: two-level paging.
- Hashed Page Tables: The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. Virtual page numbers are compared in this chain searching for a match. If a match is found, the corresponding physical frame is extracted.
- Inverted Page Tables: One entry for each real page of memory. Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page.
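The inverted page table above can be sketched as a per-frame list; the process IDs, table contents, and linear search are illustrative simplifications (real systems usually pair an inverted table with a hash table to avoid the full scan).

```python
# Inverted page table sketch: one entry per physical frame, holding the
# (pid, virtual page number) stored there; translation searches for a match.

frames = [("p1", 0), ("p2", 0), ("p1", 3), None]  # one entry per real frame

def lookup_frame(pid, vpn):
    for f, entry in enumerate(frames):  # linear search over all frames
        if entry == (pid, vpn):
            return f                    # the frame number is the table index
    return None                         # page fault: page not in memory

print(lookup_frame("p1", 3))  # frame 2 holds page 3 of process p1
print(lookup_frame("p2", 3))  # None: p2 has no page 3 in memory
```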
Paging Advantages:
- Allocating memory is easy and cheap
  - Any free page will do; the OS can take the first one from the free list it keeps
- Eliminates external fragmentation
  - Data (page frames) can be scattered all over physical memory
  - Pages are mapped appropriately anyway
- Allows demand paging and prepaging
- More efficient swapping
  - No need for considerations about fragmentation
  - Just swap out the page least likely to be used
Paging Disadvantages:
- Longer memory access times (page table lookup)
  - Can be improved using a TLB
  - Guarded page tables
  - Inverted page tables
- Memory requirements (one entry per virtual-memory page)
  - Can be improved using multilevel page tables and variable page sizes (super-pages)
  - Guarded page tables
  - Page Table Length Register (PTLR) to limit virtual memory size
- Internal fragmentation