Operating System: Memory Management

By BYJU'S Exam Prep

Updated on: September 25th, 2023

Operating systems rely on efficient memory management to ensure smooth and effective utilization of a computer’s memory resources. Memory management is a critical aspect of an operating system, responsible for organizing, tracking, and allocating memory to processes and applications running on the system.

In essence, memory management facilitates the abstraction of physical memory into logical addresses, offering a uniform view of each process. This abstraction enables processes to access memory without worrying about the underlying hardware details. It involves various techniques, such as virtual memory, paging, and segmentation, to effectively manage memory allocation and optimize system performance. By intelligently distributing memory among active processes and handling memory requests, memory management plays a vital role in enhancing multitasking capabilities and overall system stability.



Memory management techniques allow several processes to share memory. When several processes are in memory they can share the CPU, thus increasing CPU utilization.

Address Binding: Binding of instructions and data to memory addresses.

  1. Compile time: If the process's memory location is known at compile time, absolute code can be generated.
  2. Load time: The compiler generates relocatable code which is bound at load time.
  3. Execution time: If a process can be moved from one memory segment to another then binding must be delayed until run time.

Dynamic Loading

  • The routine is not loaded until it is called.
  • Better memory-space utilization;
  • The unused routine is never loaded.
  • Useful when large amounts of code are needed to handle infrequently occurring cases.
  • No special support from the operating system is required; implemented through program design.

Dynamic Linking

  • Linking is postponed until execution time.
  • A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
  • The stub replaces itself with the address of the routine, and executes the routine.
  • The operating system checks whether the routine is in the process's memory address space and loads it if it is not.
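The stub mechanism can be sketched in a few lines of Python. This is a toy model with made-up names (`real_sqrt`, `linkage_table`), not a real loader: the stub locates the "library routine" on first call, replaces its own table entry with the real routine, and then executes it.

```python
def real_sqrt(x):
    # Stands in for a memory-resident library routine.
    return x ** 0.5

# The process's linkage table initially points at stubs, not real routines.
linkage_table = {}

def make_stub(name, loader):
    def stub(*args):
        routine = loader()              # locate/load the routine on first use
        linkage_table[name] = routine   # stub replaces itself with the routine
        return routine(*args)           # ...and executes it
    return stub

linkage_table["sqrt"] = make_stub("sqrt", lambda: real_sqrt)

print(linkage_table["sqrt"](16.0))         # first call goes through the stub
print(linkage_table["sqrt"] is real_sqrt)  # later calls bind directly: True
```

After the first call, the indirection is gone: subsequent calls pay no stub overhead, which is exactly why linking can be postponed until execution time.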

Overlays: This technique keeps in memory only those instructions and data that are required at a given time. Other instructions and data are loaded into the memory space occupied by the previous ones when they are needed.

Swapping: Consider an environment that supports multiprogramming using, say, the Round Robin (RR) CPU scheduling algorithm. When one process has finished executing for one time quantum, it is swapped out of memory to a backing store.

The memory manager then picks up another process from the backing store and loads it into the memory vacated by the previous process. Then, the scheduler picks another process and allocates the CPU to it.

Memory Management Techniques

The main memory must accommodate both the operating system and the various user processes. The parts of the main memory must be allocated in the most efficient way possible.

There are two approaches to memory allocation, as given below.

Single Partition Allocation: The memory is divided into two parts, one used by the OS and the other for user programs. The OS code and data are protected from being modified by user programs using a base register.


Multiple Partition Allocation

The multiple partition allocation may be further classified as:

Fixed Partition Scheme

Memory is divided into a number of fixed-size partitions, and each partition holds one process. This scheme supports multiprogramming, as a number of processes may be brought into memory and the CPU can be switched from one process to another. When a process arrives for execution, it is put into the input queue of the smallest partition that is large enough to hold it.
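The placement rule above, "smallest partition large enough", can be sketched as follows. The partition sizes are made-up values for illustration:

```python
# size -> input queue of process sizes admitted to that partition
partitions = {100: [], 300: [], 700: []}

def admit(process_size):
    # Find every partition large enough, then pick the smallest one.
    fitting = [s for s in sorted(partitions) if s >= process_size]
    if not fitting:
        return None                        # no partition can ever hold it
    best = fitting[0]                      # smallest partition large enough
    partitions[best].append(process_size)
    return best

print(admit(250))   # -> 300
print(admit(90))    # -> 100
print(admit(800))   # -> None (too big for every partition)
```

Note that a 250-unit process placed in a 300-unit partition wastes 50 units inside the partition; that waste is the internal fragmentation discussed later.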

Variable Partition Scheme

A block of available memory is designated as a hole. At any time, a set of holes exists, consisting of holes of various sizes scattered throughout the memory. When a process arrives and needs memory, this set of holes is searched for a hole that is large enough to hold the process. If the hole is too large, it is split into two parts: one part is allocated to the process, and the unused part is returned to the set of holes. All holes that are adjacent to each other are merged into one larger hole.

There are different ways of implementing the allocation of partitions from a list of free holes, such as:

  • first-fit: allocate the first hole that is big enough
  • best-fit: allocate the smallest hole that is big enough; the entire list of holes must be searched unless it is ordered by size
  • next-fit: scan holes from the location of the last allocation and choose the next available block that is large enough (can be implemented using a circular linked list)
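First-fit and best-fit over a free-hole list can be sketched as below. Holes are hypothetical (start, size) pairs; an allocation splits the chosen hole and returns the start address, or None if nothing fits:

```python
def allocate(holes, size, policy="first"):
    candidates = [h for h in holes if h[1] >= size]
    if not candidates:
        return None, holes
    if policy == "first":
        chosen = candidates[0]                        # first hole big enough
    else:  # "best"
        chosen = min(candidates, key=lambda h: h[1])  # smallest hole big enough
    rest = [h for h in holes if h != chosen]
    start, hole_size = chosen
    if hole_size > size:                  # split: unused part rejoins the set
        rest.append((start + size, hole_size - size))
    return start, sorted(rest)

holes = [(0, 100), (200, 50), (400, 300)]
addr, remaining = allocate(holes, 40, "first")
print(addr, remaining)   # 0, and a leftover (40, 60) hole from the split
addr, remaining = allocate(holes, 40, "best")
print(addr, remaining)   # 200: the 50-unit hole is the smallest that fits
```

Next-fit differs only in where the scan begins (at the point of the last allocation rather than the head of the list), which spreads allocations more evenly across memory.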

Disadvantages of Memory Management Techniques

The above schemes cause external and internal fragmentation of the memory as given below

  • External Fragmentation: When there is enough total memory in the system to satisfy the requirements of a process but the memory is not contiguous.
  • Internal Fragmentation: The memory wasted inside the allocated blocks of memory is called internal fragmentation.

e.g., consider a process requiring 150 K: if a hole of size 170 K is allocated to it, the remaining 20 K is wasted.

Compaction: This is a strategy to solve the problem of external fragmentation. All free memory is placed together by moving processes to new locations.
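Compaction can be sketched as below: every allocated block is relocated to one end of memory so that all free space becomes a single contiguous hole. Blocks are hypothetical (name, size) pairs, and addresses are recomputed after the move:

```python
def compact(blocks, memory_size):
    new_layout, addr = [], 0
    for name, size in blocks:           # slide each block down to addr
        new_layout.append((name, addr, size))
        addr += size
    free_hole = (addr, memory_size - addr)   # one contiguous hole at the top
    return new_layout, free_hole

blocks = [("A", 100), ("B", 50), ("C", 200)]
layout, hole = compact(blocks, 1000)
print(layout)  # A at 0, B at 100, C at 150
print(hole)    # (350, 650): a single 650-unit hole
```

The cost hidden by this sketch is the copying itself: compaction is only possible when relocation is dynamic (done at execution time), and moving every process is expensive, which is one motivation for paging.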

Paging

  • It is a memory management technique, which allows the memory to be allocated to the process wherever it is available.
  • Physical memory is divided into fixed-size blocks called frames.
  • Logical memory is broken into blocks of the same size called pages.
  • The backing store is also divided into same-size blocks.
  • When a process is to be executed its pages are loaded into available page frames.
  • Each frame holds exactly one page; pages and frames are the same size.
  • Every logical address generated by the CPU is divided into two parts: The page number (P) and the page offset (d).
  • The page number is used as an index in a page table.


  • Each entry in the page table contains the base address (frame number, f) of the page in physical memory.
  • The frame number f from the Pth entry is then combined with the offset (d) to give the actual physical address in memory.

Paging Example:



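A small numeric sketch (made-up page table, 4 KB pages) illustrates the translation described above: the logical address is split into a page number p and an offset d, the page table maps p to a frame number f, and the physical address is the frame base plus the offset.

```python
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)   # page number and page offset
    f = page_table[p]                        # index the page table with p
    return f * PAGE_SIZE + d                 # frame base combined with offset

print(translate(1000))   # page 0 -> frame 5: 5*4096 + 1000 = 21480
print(translate(5000))   # page 1 -> frame 2: 2*4096 + 904  = 9096
```

In hardware the split is a bit-field extraction rather than a division: with a 4 KB page, d is simply the low 12 bits of the address and p is the remaining high bits.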
Implementation of Page Table

  • The page table is kept in the main memory.
  • Page-table base register (PTBR) points to the page table.
  • Page-table length register (PTLR) indicates the size of the page table.
  • In this scheme, every data/instruction access requires two memory accesses. One for the page table and one for the data/instruction.
  • The two memory access problems can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs)
  1. Associative Memory:
  • The frame number is available within the associative memory.
  • Each entry in the associative memory has two portions: the page number and the frame number.
  2. Paging Hardware With TLB:


  • Effective Access Time

    Let the associative (TLB) lookup take t time units, and assume the memory cycle time is 1 microsecond.

    Hit ratio (h): the percentage of times that a page number is found in the associative registers; the hit ratio is related to the number of associative registers.

    Effective Access Time (EAT) = (1 + t)h + (2 + t)(1 – h) = 2 + t – h
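The EAT formula can be evaluated for a few example values. With TLB lookup time t and a 1-microsecond memory cycle, a hit costs t + 1 (lookup plus one memory access) and a miss costs t + 2 (one extra access for the page table):

```python
def eat(t, h):
    # Weighted average of the hit and miss costs; equals 2 + t - h.
    return (1 + t) * h + (2 + t) * (1 - h)

print(round(eat(0.2, 0.8), 2))    # 2 + 0.2 - 0.8  = 1.4 microseconds
print(round(eat(0.2, 0.98), 2))   # 2 + 0.2 - 0.98 = 1.22 microseconds
```

Raising the hit ratio from 80% to 98% cuts the average access time noticeably, which is why even a small TLB pays off: most programs exhibit enough locality that a few dozen entries achieve very high hit ratios.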

Memory Protection: Memory protection is implemented by associating the protection bit with each frame.

Valid-invalid bit attached to each entry in the page table:

  1. “valid” indicates that the associated page is in the process's logical address space, and is thus a legal page.
  2. “invalid” indicates that the page is not in the process's logical address space.
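The valid-invalid check can be sketched as below, with each (hypothetical) page-table entry carrying a valid bit; referencing an invalid page raises a fault, modeled here as an exception:

```python
page_table = {0: (5, True), 1: (2, True), 2: (0, False)}  # p -> (frame, valid)

def lookup(p):
    frame, valid = page_table[p]
    if not valid:
        # In real hardware this is a trap to the operating system.
        raise MemoryError(f"page {p} is not in the logical address space")
    return frame

print(lookup(1))    # valid page -> frame 2
try:
    lookup(2)
except MemoryError as e:
    print("trap:", e)
```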



Page Table Structure

  1. Hierarchical Paging: Break up the logical address space into multiple levels of page tables.

Example: Two-level paging


  2. Hashed Page Tables: The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. Virtual page numbers are compared in this chain searching for a match. If a match is found, the corresponding physical frame is extracted.
  3. Inverted Page Tables: One entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page.
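The hashed-page-table lookup described above can be sketched as follows (bucket count and mappings are made-up values): virtual page numbers hash to a bucket, each bucket holds a chain of (virtual page, frame) elements, and the chain is searched for a match.

```python
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    for v, f in buckets[vpn % NUM_BUCKETS]:  # walk the chain
        if v == vpn:
            return f                         # match: extract the frame
    return None                              # no match: page fault

insert(3, 42)
insert(11, 7)      # 11 % 8 == 3 % 8, so these chain in the same bucket
print(lookup(3))   # 42
print(lookup(11))  # 7
print(lookup(5))   # None
```

Hashed page tables suit sparse 64-bit address spaces, where a flat array indexed by page number would be impossibly large but the number of pages actually in use is small.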

Paging Advantages

  • Allocating memory is easy and cheap: any free frame will do, so the OS can take the first one from the free list it keeps.
  • Eliminates external fragmentation: data (page frames) can be scattered all over physical memory, and pages are mapped appropriately anyway.
  • Allows demand paging and prepaging.
  • More efficient swapping: no need to consider fragmentation; just swap out the page least likely to be used.

Paging Disadvantages

  • Longer memory access times (page table lookup); can be improved using a TLB, guarded page tables, or inverted page tables.
  • Memory requirements (one entry per virtual-memory page); can be improved using multilevel page tables and variable page sizes (super-pages), guarded page tables, or a page-table length register (PTLR) to limit virtual memory size.
  • Internal fragmentation.

You can follow the detailed champion study plan for GATE CS 2024 from the following link:

Detailed GATE CSE 2024 Champion Study Plan

Candidates can also practice 110+ Mock tests for exams like GATE, and NIELIT with BYJU’s Exam Prep, and Test Series check the following link:

Click Here to Avail GATE CSE Test Series! (100+ Mock Tests)

Get unlimited access to 21+ structured Live Courses and all 112+ mock tests with BYJU’s Exam Prep Online Classroom Program for GATE CS & PSU Exams:

Click here to avail Online Classroom Program for Computer Science Engineering


Get complete information about the GATE exam pattern, cut-off, and related topics on the BYJU'S Exam Prep official YouTube channel.
