Operating System: Concurrency

By BYJU'S Exam Prep

Updated on: September 25th, 2023

Concurrency is a fundamental concept in operating systems that plays a crucial role in maximizing system efficiency and resource utilization. It refers to the ability of an operating system to handle multiple tasks and processes simultaneously, allowing them to execute concurrently. Concurrency enables efficient utilization of system resources, enhances performance, and provides a seamless user experience.

In an operating system, various processes or threads may need to execute simultaneously, either independently or in coordination with each other. This concurrent execution can lead to improved efficiency and responsiveness, as multiple tasks can progress concurrently, taking advantage of available system resources. However, managing concurrency poses significant challenges, such as avoiding conflicts, ensuring data integrity, and coordinating resource access among concurrent processes.



Operating systems incorporate various mechanisms to facilitate concurrency management. These mechanisms include process scheduling algorithms, inter-process communication mechanisms, synchronization primitives, and memory management techniques. They enable the operating system to allocate resources, schedule processes, and coordinate their execution efficiently while maintaining data integrity and preventing conflicts.

Concurrency in operating systems also extends beyond the realm of multi-tasking and multi-threading. It encompasses broader concepts such as parallel processing, distributed systems, and the utilization of multiple processors or cores to execute tasks concurrently. These advanced forms of concurrency enable high-performance computing, distributed processing, and parallel execution of computationally intensive tasks.

Understanding and effectively managing concurrency in operating systems is vital for developing robust and efficient software applications. It requires careful consideration of synchronization mechanisms, resource allocation strategies, and proper coordination among concurrent processes. By harnessing the power of concurrency, operating systems can achieve optimal utilization of system resources, enhance performance, and deliver a seamless user experience.

A sequential program has a single thread of control. Its execution is called a process. A concurrent program has multiple threads of control. They may be executed as parallel processes. This lesson presents the main principles of concurrent programming. A concurrent program can be executed by

  • Multiprogramming: processes share one or more processors
  • Multiprocessing: each process runs on its own processor but with shared memory
  • Distributed processing: each process runs on its own processor connected by a network to others

Concurrent programs are governed by two key principles: safety and liveness.

  • The safety principle states that "nothing bad ever happens."
  • The liveness principle states that "eventually, something good happens."


Safety ensures that shared data remains in a consistent state during and after concurrent operations. Suppose functions A and B below run concurrently. What is the resulting value of x?

    var x = 0;
    function A() { x = x + 1; }
    function B() { x = x + 2; }

  • x = 3 if the operations x = x + 1 and x = x + 2 are atomic, i.e., they cannot be interrupted.
  • x = 1, 2, or 3 if the operations can interrupt one another.
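The possible outcomes can be enumerated mechanically. The sketch below (in Python, purely for illustration) models each assignment as a separate read step and write step, then computes every final value a scheduler could produce:

```python
from itertools import permutations

# Model each increment as two steps: read x, then write back.
# A adds 1, B adds 2; a schedule is one interleaving of the four steps.
def run(schedule):
    x = 0
    regs = {}  # per-thread register holding the value that thread read
    for thread, step in schedule:
        if step == "read":
            regs[thread] = x
        else:  # write back the read value plus this thread's increment
            x = regs[thread] + (1 if thread == "A" else 2)
    return x

steps = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
# Keep only interleavings where each thread reads before it writes.
schedules = {
    p for p in permutations(steps)
    if p.index(("A", "read")) < p.index(("A", "write"))
    and p.index(("B", "read")) < p.index(("B", "write"))
}
print(sorted({run(s) for s in schedules}))  # → [1, 2, 3]
```

Exactly the three results listed above appear: 3 when the updates happen to run back to back, 1 or 2 when one thread's write overwrites the other's.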

If we read/modify/write a file and allow operations to interrupt one another, the file might be easily corrupted. Safety is ensured by implementing

  • mutual exclusion and
  • condition synchronization

when operating on shared data. Mutual exclusion means that only one thread can access the data at a time, ensuring that the data remains in a consistent state during and after the operation (an atomic update). Condition synchronization means that an operation may be delayed if a shared resource is in the wrong state (e.g., a read from an empty buffer).
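Both mechanisms can be sketched together with a one-slot buffer (a Python illustration using threading.Condition; the class and names are ours, not from the text): the condition's underlying lock provides mutual exclusion, while wait()/notify_all() delay an operation until the buffer is in the right state.

```python
import threading

class OneSlotBuffer:
    """One-slot buffer: readers wait while empty, writers while full."""
    def __init__(self):
        self.cond = threading.Condition()
        self.item = None
        self.full = False

    def put(self, value):
        with self.cond:              # mutual exclusion via the condition's lock
            while self.full:         # condition synchronization: delayed
                self.cond.wait()     # while the buffer is in the wrong state
            self.item, self.full = value, True
            self.cond.notify_all()

    def get(self):
        with self.cond:
            while not self.full:     # cannot read from an empty buffer
                self.cond.wait()
            value, self.full = self.item, False
            self.cond.notify_all()
            return value

buf = OneSlotBuffer()
out = []
reader = threading.Thread(target=lambda: out.extend(buf.get() for _ in range(3)))
reader.start()
for v in "abc":
    buf.put(v)                       # writer blocks until the slot is free
reader.join()
print(out)  # → ['a', 'b', 'c']
```

Because the slot holds one item at a time, the reader always observes the writer's values in order, regardless of how the two threads are scheduled.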


Properties of Concurrent Processes

Mutual exclusion solves many safety issues but gives rise to other problems, in particular deadlock and starvation. Deadlock arises when a thread holds a lock on one object and blocks while attempting to acquire a lock on a second object that is already locked by another thread, which is itself blocked waiting for the lock the first thread holds.

Both threads wait for the other to release the necessary lock, but neither ever will, because each is blocked waiting for the other to go first. Stated like this it may seem an unlikely occurrence, but deadlock is in fact one of the most common concurrent-programming bugs, and it can spread across more than two threads and involve complex interdependencies. Deadlock is an extreme form of starvation, which occurs when a thread cannot proceed because it cannot gain access to a resource it requires.
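The standard remedy for the two-lock scenario above is to impose a single global lock order, so a circular wait can never form. A minimal Python sketch (the thread and lock names are illustrative):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

# Deadlock-prone pattern: one thread takes a then b, the other b then a.
# The fix below: every thread acquires lock_a before lock_b, so no cycle
# of waiting threads can ever form.
def task(first, second, results, name):
    with first:
        with second:
            results.append(name)

results = []
t1 = threading.Thread(target=task, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=task, args=(lock_a, lock_b, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # → ['t1', 't2'] — both threads terminate
```

Had t2 been given the locks in the opposite order, the two threads could each grab one lock and then block forever on the other.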
Liveness

The problems of deadlock and starvation bring us to the next big topic in concurrent programming: liveness. A concurrent program has the liveness property if it guarantees:
  • No Deadlock: some processes can always access a shared resource
  • No Starvation: all processes can eventually access shared resources

The liveness principle states that eventually, something good happens; deadlocked programs fail this requirement. Liveness is a matter of degree: programs can be 'nearly' dead or 'not very' live. Every time you use a synchronized method, you force sequential access to an object; if many threads call many synchronized methods on the same object, your program will slow down considerably. A programming language must provide mechanisms for expressing concurrency:

  • Process creation: how do you specify concurrent processes?
  • Communication: how do processes exchange information?
  • Synchronization: how do processes maintain consistency?

Process Creation

Most concurrent languages offer some variant of the following process-creation mechanisms:

  • Co-routines
  • Fork and join
  • Cobegin/coend

Co-routines are only pseudo-concurrent and require explicit transfers of control:
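As an illustration (Python generators standing in for co-routines; the scheduler loop is our own), each yield is an explicit transfer of control, and only one routine runs at a time:

```python
# Two co-routines interleaved by a round-robin "scheduler": only one
# runs at any moment (pseudo-concurrency), and every transfer of
# control is explicit via yield.
def worker(name, steps, log):
    for i in range(steps):
        log.append(f"{name}{i}")
        yield            # explicit transfer of control back to the scheduler

log = []
tasks = [worker("A", 2, log), worker("B", 2, log)]
while tasks:
    task = tasks.pop(0)
    try:
        next(task)       # resume the co-routine until its next yield
        tasks.append(task)
    except StopIteration:
        pass             # this co-routine has finished
print(log)  # → ['A0', 'B0', 'A1', 'B1']
```

The interleaving is fully deterministic precisely because nothing runs in parallel: the scheduler decides every switch.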


Co-routines can be used to implement most higher-level concurrency mechanisms. Fork can be used to create any number of processes; join waits for another process to terminate. Fork and join are unstructured, so they require care and discipline. Cobegin/coend blocks are better structured, but they can only create a fixed number of processes.
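Fork and join can be sketched with threads (a Python illustration in which start() plays the role of fork and join() of join):

```python
import threading

# "Fork": start() creates a new thread of control; any number can be
# created. "Join": join() waits for that thread to terminate.
results = {}

def child(n):
    results[n] = n * n   # each child writes its own distinct key

threads = [threading.Thread(target=child, args=(n,)) for n in range(4)]
for t in threads:
    t.start()            # fork all children
for t in threads:
    t.join()             # wait for every child to terminate
print(sorted(results.items()))  # → [(0, 0), (1, 1), (2, 4), (3, 9)]
```

The "unstructured" danger is that nothing forces the joins to happen: forgetting one leaves a child running past the point where its result is consumed.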

The caller continues when all of the co-blocks have terminated.
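This cobegin/coend behaviour can be approximated with a structured task block (Python's ThreadPoolExecutor, used purely as an illustration): leaving the block waits for every task started inside it.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

with ThreadPoolExecutor() as pool:               # cobegin
    futures = [pool.submit(square, n) for n in range(4)]
# coend: exiting the with-block blocks until all submitted tasks finish,
# so the caller only continues once every co-block has terminated.
print([f.result() for f in futures])  # → [0, 1, 4, 9]
```

The structure is the point: unlike bare fork/join, the block itself guarantees that no task outlives it.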