Operating System: CPU Scheduling

By BYJU'S Exam Prep

Updated on: September 25th, 2023

In modern operating systems, the efficient utilization of the CPU (Central Processing Unit) is crucial for optimal performance. This is where CPU scheduling comes into play. CPU scheduling is a core component of an operating system that determines how processes are allocated and executed on the CPU.

The primary objective of CPU scheduling is to maximize system throughput and ensure fairness in process execution. As multiple processes compete for CPU time, the scheduler decides which process should run next based on predefined algorithms. These algorithms take into account various factors such as process priorities, burst times, and waiting times to make informed decisions about process dispatching and switching. By efficiently managing the CPU’s resources and handling process execution, CPU scheduling plays a vital role in enhancing system responsiveness and overall performance.


All processes that are ready to execute are kept in main memory. The selection of one of these processes is known as scheduling; the selected process is then given control of the CPU.

Scheduling Criteria: The criteria for comparing CPU scheduling algorithms include the following

  • CPU Utilization: This means keeping the CPU as busy as possible.
  • Throughput: The amount of work completed, i.e., the number of processes that complete per unit time.
  • Turnaround Time: The interval from the time of submission of a process to the time of completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
  • Waiting Time: The sum of the periods spent waiting in the ready queue.
  • Response Time: The time from the submission of a request until the first response is produced.
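
For a process that does no I/O, the criteria above are related by two simple identities. A quick sketch, with hypothetical numbers chosen purely for illustration:

```python
# Basic per-process scheduling metrics, assuming a CPU-only process
# (no I/O). The times below are hypothetical, for illustration only.
arrival, burst, completion = 0, 5, 12

turnaround = completion - arrival   # total time from submission to completion
waiting = turnaround - burst        # time spent waiting in the ready queue

print(turnaround, waiting)          # 12 7
```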

First Come First Served (FCFS) Scheduling

  • With this scheme, the process that requests the CPU first is allocated the CPU first.
  • The implementation of the FCFS policy is easily managed with the FIFO queue.
  • When a process enters the ready queue, its PCB (Process Control Block) is linked to the tail of the queue.
  • When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
  • FCFS scheduling is non-preemptive, very simple, and can be implemented with a FIFO queue. It is not a good choice when burst times vary widely (a mix of CPU-bound and I/O-bound processes); its main drawback is that short processes must wait behind long ones (the convoy effect).
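
The FCFS policy above can be sketched in a few lines. The workload here is hypothetical, chosen to show the convoy effect: one long job arriving first delays all the short ones.

```python
def fcfs(processes):
    """Compute per-process waiting times under FCFS.
    processes: list of (name, arrival, burst), assumed sorted by arrival."""
    time = 0
    waits = {}
    for name, arrival, burst in processes:
        time = max(time, arrival)     # CPU may sit idle until the process arrives
        waits[name] = time - arrival  # time spent in the ready queue
        time += burst                 # run to completion (non-preemptive)
    return waits

# Hypothetical workload: a 24 ms job ahead of two 3 ms jobs.
print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
# {'P1': 0, 'P2': 24, 'P3': 27}
```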

Shortest Job First (SJF) Scheduling

  • When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
  • If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie.
  • A more appropriate term for this scheduling method would be the shortest next CPU burst algorithm because scheduling depends on the length of the next CPU burst of a process rather than its total length.
  • SJF schedules based on CPU burst times (not total job time). This information is difficult to obtain exactly, so it must be approximated for short-term scheduling; SJF is used frequently for long-term scheduling. It may be non-preemptive (plain SJF) or preemptive (SRTF), and both are special cases of priority scheduling.
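
A minimal sketch of non-preemptive SJF, assuming burst times are known in advance (in practice they must be estimated). The workload is hypothetical:

```python
def sjf(processes):
    """Non-preemptive SJF: at each decision point, run the arrived process
    with the smallest next burst; arrival order (FCFS) breaks ties.
    processes: list of (name, arrival, burst)."""
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                 # CPU idle: jump to the next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        waits[name] = time - arrival
        time += burst                 # run the chosen burst to completion
        remaining.remove((name, arrival, burst))
    return waits

# Hypothetical bursts 6, 8, 7, 3, all arriving at time 0.
print(sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
```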

Preemptive and Non-preemptive Algorithm

  • The SJF algorithm can either be preemptive or non-preemptive.
  • The choice arises when a new process arrives at the ready queue while a previous process is still executing.
  • The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.
  • A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive algorithm will allow the currently running process to finish its CPU burst.
  • Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.
Process Table (arrival and burst times in ms):

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

The resulting preemptive (SRTF) schedule is: P1 [0–1], P2 [1–5], P4 [5–10], P1 [10–17], P3 [17–26].

Waiting time for P1 = 10 – 1 = 9 ms
Waiting time for P2 = 1 – 1 = 0 ms
Waiting time for P3 = 17 – 2 = 15 ms
Waiting time for P4 = 5 – 3 = 2 ms
Average waiting time = (9 + 0 + 15 + 2)/4 = 6.5 ms
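
The SRTF waiting times above can be reproduced with a small unit-time simulation. The arrival and burst values below are assumptions chosen to be consistent with the worked figures:

```python
def srtf(processes):
    """Preemptive SJF (shortest-remaining-time-first), simulated one
    time unit at a time. processes: list of (name, arrival, burst)."""
    remaining = {name: burst for name, arrival, burst in processes}
    arrival_of = {name: a for name, a, b in processes}
    bursts = {name: b for name, a, b in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival_of[n] <= time]
        if not ready:                 # nothing has arrived yet
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = time
    # waiting time = turnaround time - burst time
    return {n: finish[n] - arrival_of[n] - bursts[n] for n in finish}

# Assumed workload matching the worked example above.
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
print(srtf(procs))
# {'P2': 0, 'P4': 2, 'P1': 9, 'P3': 15}
```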

Priority Scheduling

  • A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.
  • Either convention may be used: low numbers can represent high priority, or low numbers can represent low priority. Assume whichever convention the question states.
  • Priority scheduling can be either preemptive or non-preemptive.
  • A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
  • A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
A major problem with the priority scheduling algorithm is indefinite blocking, or starvation: a low-priority process may wait indefinitely while higher-priority processes keep arriving. Ageing, gradually raising the priority of long-waiting processes, is the usual remedy.
Process Table (all processes arrive at time 0; here a lower number denotes a higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

The resulting schedule is: P2 [0–1], P5 [1–6], P1 [6–16], P3 [16–18], P4 [18–19].

Waiting time for P1 = 6 ms
Waiting time for P2 = 0 ms
Waiting time for P3 = 16 ms
Waiting time for P4 = 18 ms
Waiting time for P5 = 1 ms
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms
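
These waiting times can be reproduced with a short non-preemptive priority sketch. The burst and priority values below are assumptions, chosen to be consistent with the figures above:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling with all processes arriving at
    time 0; a lower number means a higher priority (one of the two
    conventions mentioned above). processes: list of (name, burst, priority).
    sorted() is stable, so equal priorities keep FCFS order."""
    time, waits = 0, {}
    for name, burst, _ in sorted(processes, key=lambda p: p[2]):
        waits[name] = time   # everything arrived at 0, so wait = start time
        time += burst
    return waits

# Assumed workload matching the waiting times worked out above.
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
print(priority_np(procs))
# {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
```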

Round Robin (RR) Scheduling

  • The RR scheduling algorithm is designed especially for time-sharing systems.
  • It is similar to FCFS scheduling but preemption is added to switch between processes.
  • A small unit of time called a time quantum or time slice is defined.
  • If the time quantum is too large, RR degenerates into FCFS. Since context switching costs time, a rule of thumb is that about 80% of CPU bursts should be shorter than the time quantum.
Process Table (all processes arrive at time 0):

Process   Burst Time
P1        24
P2        3
P3        3

Let's take time quantum = 4 ms. The resulting RR schedule is: P1 [0–4], P2 [4–7], P3 [7–10], then P1 runs its remaining 20 ms.

P1 waits for 6 ms (10 – 4), P2 waits for 4 ms, and P3 waits for 7 ms.
Average waiting time = (6 + 4 + 7)/3 ≈ 5.67 ms
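
The RR waiting times above can be reproduced with a small queue-based simulation. The burst values are assumptions consistent with the worked figures:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin with all processes arriving at time 0.
    processes: list of (name, burst). Returns waiting time per process."""
    remaining = {n: b for n, b in processes}
    last_end = {n: 0 for n, b in processes}   # when each process last left the CPU
    waits = {n: 0 for n, b in processes}
    queue = deque(n for n, b in processes)
    time = 0
    while queue:
        name = queue.popleft()
        waits[name] += time - last_end[name]  # waiting since its last time slice
        run = min(quantum, remaining[name])
        time += run
        last_end[name] = time
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)                # unfinished: back to the tail
    return waits

# Assumed workload matching the example above, quantum = 4 ms.
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# {'P1': 6, 'P2': 4, 'P3': 7}
```
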
Multilevel Queue Scheduling

  • This scheduling algorithm has been created for situations in which processes are easily classified into different groups.
  • A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
  • The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority or process type.

Multilevel Feedback Queue Scheduling

  • This scheduling algorithm allows a process to move between queues.
  • The idea is to separate processes according to the characteristics of their CPU bursts.
  • If a process uses too much CPU time, it will be moved to a lower-priority queue.
  • Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of ageing prevents starvation.
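
The demotion rule described above can be sketched minimally. The queue levels and quanta below are assumptions for illustration, not a full scheduler:

```python
# Minimal multilevel-feedback sketch: three queue levels with growing
# time quanta; a process that exhausts its whole quantum is demoted,
# while one that blocks early (I/O-bound) keeps its level.
QUANTA = [4, 8, None]   # None = run to completion (FCFS at the bottom level)

def next_level(level, used_full_quantum):
    """Return the queue level a process should occupy after its slice."""
    if used_full_quantum and level < len(QUANTA) - 1:
        return level + 1   # CPU-hungry: demote one level
    return level           # interactive or already at the bottom: stay put

print(next_level(0, True))   # 1: burned its whole slice, demoted
print(next_level(0, False))  # 0: blocked early, stays at high priority
```
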
CPU Overhead:

  • First In First Out: Low
  • Shortest job first: Medium
  • Priority-based Scheduling: Medium
  • Multilevel Queue Scheduling: High

Throughput:

  • First In First Out: Low
  • Shortest job first: High
  • Priority-based Scheduling: Low
  • Multilevel Queue Scheduling: Medium

Turnaround Time:

  • First In First Out: High
  • Shortest job first: Medium
  • Priority-based Scheduling: Medium
  • Multilevel Queue Scheduling: Medium

Response Time:

  • First In First Out: Low
  • Shortest job first: Medium
  • Priority-based Scheduling: High
  • Multilevel Queue Scheduling: Medium

