Operating System: Process & Threads

By BYJU'S Exam Prep

Updated on: September 25th, 2023

Operating systems play a critical role in the functionality and management of computer systems. One of the fundamental aspects of an operating system is its ability to handle processes and threads. Processes and threads are essential components that enable efficient multitasking and resource allocation within a system. The operating system’s role in managing processes and threads is crucial for maintaining system stability, responsiveness, and resource utilization. It ensures fair scheduling of processes and threads, preventing conflicts and providing efficient execution. Additionally, it facilitates communication and synchronization between different processes and threads, enabling them to cooperate and share information when necessary.

Understanding the concepts of processes and threads is fundamental to grasping the complexities and functionalities of modern operating systems. Efficient process and thread management are crucial for achieving optimal performance, multitasking capabilities, and effective resource allocation within computer systems.



Process: A process can be defined in any of the following ways

  • A process is a program in execution.
  • It is an asynchronous activity.
  • It is the entity to which processors are assigned.
  • It is a dispatchable unit.
  • It is the unit of work in a system.

A process is more than a program code. It also includes the current activity as represented by the following:

  • The current value of the Program Counter (PC)
  • Contents of the processor registers
  • Value of the variables
  • The process stack which contains temporary data such as subroutine parameters, return addresses, and temporary variables.
  • A data section that contains global variables.

Process in Memory

Each process is represented in the operating system by a Process Control Block (PCB), also called a task control block.

PCB: A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including:

  • The current state of the process i.e., whether it is ready, running, waiting, etc.
  • A unique identification of the process, in order to track which process is which.
  • A pointer to the parent process.
  • Similarly, a pointer to the child process (if it exists).
  • The priority of process (a part of CPU scheduling information).
  • Pointers to locate memory of processes.
  • A register save area.
  • The processor it is running on.
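The PCB fields listed above can be sketched as a C structure. This is an illustrative layout only; field names and types are assumptions, and real kernels (e.g., Linux's task_struct) differ considerably.

```c
#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Illustrative PCB layout -- not taken from any real kernel */
struct pcb {
    pid_t            pid;           /* unique process identifier          */
    enum proc_state  state;         /* ready, running, waiting, ...       */
    struct pcb      *parent;        /* pointer to the parent process      */
    struct pcb      *child;         /* pointer to a child process, if any */
    int              priority;      /* CPU-scheduling information         */
    void            *mem_base;      /* pointer used to locate its memory  */
    unsigned long    registers[16]; /* register save area                 */
    int              cpu;           /* processor it last ran on           */
};
```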

Process State Model


Process state: The process state consists of everything necessary to resume the process execution if it is somehow put aside temporarily. The process state consists of at least the following:

  • Code for the program.
  • Program’s static data.
  • Program’s dynamic data.
  • Program’s procedure call stack.
  • Contents of general purpose registers.
  • Contents of the program counter (PC)
  • Contents of program status word (PSW).
  • Operating system resources in use.

A process goes through a series of discrete process states.

  • New State: The process being created.
  • Running State: A process is said to be running if it has the CPU, that is, a process actually using the CPU at that particular instant.
  • Blocked (or waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
  • Ready State: A process is said to be ready if it could use a CPU if one were available. A ready-state process is runnable but temporarily stopped to let another process run.
  • Terminated state: The process has finished execution.


A process migrates among various scheduling queues throughout its lifetime. For scheduling purposes, the OS must select processes from these queues in some fashion. The selection is carried out by the appropriate scheduler.

  • Long-Term Scheduler: A long-term scheduler or job scheduler selects processes from the job pool (mass storage device, where processes are kept for later execution) and loads them into memory for execution. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory).
  • Short-Term Scheduler: A short-term scheduler or CPU scheduler selects one of the processes in main memory that are ready to execute and allocates the CPU to it.
  • Medium-Term Scheduler: The medium-term scheduler, available in some systems, is responsible for the swapping operations: loading a process into main memory from secondary memory (swap in), and removing a process from main memory and storing it in secondary memory (swap out).


  • Dispatcher: It is the module that gives control of the CPU to the process selected by the short-term scheduler.
  • Functions of the Dispatcher: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.

The fork() System Call

A system call named fork() is used to create processes. It takes no arguments and returns a process ID. The purpose of fork() is to create a new process, which becomes the child process of the caller. After a new child process is created, both processes will execute the next instruction following the fork() system call. Therefore, we have to distinguish the parent from the child. This can be done by testing the returned value of fork():

  • If fork() returns a negative value, the creation of a child process was unsuccessful.
  • fork() returns a zero to the newly created child process.
  • fork() returns a positive value, the process ID of the child process, to the parent. The returned process ID is of type pid_t defined in sys/types.h. Normally, the process ID is an integer. Moreover, a process can use the function getpid() to retrieve the process ID assigned to this process.

Therefore, after the system call to fork(), a simple test can tell which process is the child. Please note that Unix will make an exact copy of the parent’s address space and give it to the child. Therefore, the parent and child processes have separate address spaces. Example: Calculate the number of times hello is printed.

#include <stdio.h>
#include <unistd.h>

int main()
{
    fork();
    fork();
    fork();
    printf("hello\n");
    return 0;
}
The number of times hello is printed equals the number of processes created. Total number of processes = 2^n, where n is the number of fork() system calls. Here n = 3, so 2^3 = 8.

What are Threads?

Thread: A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within a process. In an operating system with a thread facility, the basic unit of CPU utilization is a thread.

  • A thread can be in any of several states (Running, Blocked, Ready or Terminated).
  • Each thread has its own stack.
  • A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; a thread shares with the other threads of its process (also known as a task) its code section, data section, and OS resources, such as open files and signals.


An application is typically implemented as a separate process with several threads of control. In some situations, a single application may be required to perform several similar tasks, e.g., a web server accepts client requests for web pages, images, sound, and so on.

  • Threads share CPU and only one thread is active (running) at a time.
  • Threads within a process execute sequentially.
  • Thread can create children.
  • If one thread is blocked, another thread can run.
  • Threads are not independent of one another.
  • All threads can access every address in the task.
  • Threads are designed to assist one another.

Multithreading Model: There are two types of threads.

  1. User threads
  2. Kernel threads

User Threads: They exist above the kernel and are managed without kernel support. User-level threads are implemented in user-level libraries rather than via system calls, so thread switching does not need to call the operating system and interrupt the kernel. In fact, the kernel knows nothing about user-level threads and manages the containing processes as if they were single-threaded.

Kernel Threads: Kernel threads are supported and managed directly by the operating system. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads. Kernel-level threads are slower to create and manage, and incur more overhead, than user-level threads. There are three common ways of establishing the relationship between user threads and kernel threads:

  1. Many-to-many model
  2. One-to-one model
  3. Many-to-one model
  • The one-to-one model maps each user thread to a corresponding kernel thread.
  • The many-to-many model multiplexes many user threads onto a smaller or equal number of kernel threads.
  • The many-to-one model maps many user threads to a single kernel thread.
  • User-level threads are visible to the programmer and unknown to the kernel.
  • User-level threads are faster to create and manage than kernel threads.
