
Operating System Study Notes for IBPS SO IT Officer Exam

By BYJU'S Exam Prep

Updated on: September 25th, 2023

Operating System is an important chapter in the IBPS SO IT Officer exam 2022. You must have a strong hold on this chapter to score well in the IBPS SO exam.

IBPS will release the IBPS SO vacancies for various posts such as I.T. Officer, Agricultural Field Officer, Rajbhasha Adhikari, Law Officer, HR/Personnel Officer, and Marketing Officer. In this article, we cover study material on the operating system that will help you succeed in the upcoming IBPS SO Exam 2022.

What is an Operating System?

An operating system acts as an intermediary between the user of a computer and the computer hardware. An Operating System (OS) is software that manages the computer hardware.

  • Hardware: It provides the basic computing resources for the system. It consists of CPU, memory, and input/output (I/O) devices.
  • Application Programs: Define the ways in which these resources are used to solve users’ computing problems. e.g., word processors, spreadsheets, compilers, and web browsers.

What are the Components of an Operating System?

  • Process Management: The operating system manages many kinds of activities ranging from user programs to system programs like printer spoolers, name servers, file servers, etc.
  • Main-Memory Management: Primary-Memory or Main-Memory is a large array of words or bytes. Each word or byte has its own address. Main-memory provides storage that can be accessed directly by the CPU. That is to say for a program to be executed, it must be in the main memory.
  • File Management: A file is a collection of related information defined by its creator. The computer can store files on the disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, a magnetic disk, and an optical disk. Each of these media has its own properties like speed, capacity, data transfer rate, and access methods.
  • I/O System Management: The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.
  • Secondary-Storage Management: Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage (primary, secondary, or cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.
  • Protection System: Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.
  • Networking: Provides generalized access to network resources.
  • Command-Interpreter System: The interface between the user and the OS.

Functions of Operating System

  • Memory Management
  • Processor Management
  • Device Management
  • Storage Management
  • Application Interface
  • User Interface
  • Security

Operating System Services

The operating system provides many services to user programs.

  • Program Execution: The operating system helps to load a program into memory and run it.
  • I/O Operations: A running program may request I/O operations. For efficiency and protection, users cannot control I/O devices directly, so the operating system must provide a means of performing I/O on their behalf.
  • File System Manipulation: Programs need to read and write files, and files may be created and deleted by name. The operating system is responsible for this file management.
  • Communications: One process often needs to exchange information with another. The exchange can take place between processes executing on the same computer, or between processes on different computer systems tied together by a computer network. The operating system takes care of both cases.
  • Error Detection: The operating system must be aware of possible errors and take appropriate action to ensure correct and consistent computing.

Important Tasks of the Operating System:

An operating system can perform a single operation or multiple operations at a time, so operating systems are classified by their working techniques.

1. Serial Processing: A serial processing operating system performs all instructions in sequence: jobs given by the user are executed in FIFO (First In, First Out) order. Punch cards were commonly used: all jobs were first prepared and stored on cards, the cards were fed into the system, and the instructions were executed one by one. The main problem is that the user cannot interact with the system while a job is running, which means no data can be entered during execution.

2. Batch Processing: Batch processing resembles serial processing, except that similar types of jobs are prepared together, stored on cards, and submitted to the system as one batch, which is executed without user intervention. The main limitations are that the jobs in a batch must be of the same type, and a job that requires input from the user during execution cannot be handled.


3. Multi-Programming: Multiple programs are kept in memory and executed on the system at the same time, so the CPU never sits idle: while one program is waiting (for example, for I/O), the CPU switches to another program that is ready to run. While working with one program, a user can also submit a second program for running, and the user can interact with the system to supply input.


4. Real-Time System: Here the response time is fixed in advance, i.e., the time to display results after processing is bounded by the processor or CPU. Real-time systems are used in places where fast and timely responses are required.

  • Hard Real-Time System: In a hard real-time system, the timing is strict and no deadline may ever be missed; the CPU must process the data within the fixed time, without exception.
  • Soft Real-Time System: In a soft real-time system, some timings can be relaxed; an occasional small delay after a command is given to the CPU (on the order of microseconds) is tolerable.

5. Distributed Operating System: Distributed means that data is stored and processed on multiple computers placed at different locations and connected to one another in a network. If we want to take some data from another computer, we use the distributed processing system; we can also insert and remove data from our location to another location. This data is shared between many users, and input and output devices can likewise be accessed by multiple users.

6. Multiprocessing: In multiprocessing there are two or more CPUs in a single operating system; if one CPU fails, another CPU provides backup for it. With the help of multiprocessing we can execute many jobs at a time, since the operations are divided among the CPUs. If the first CPU completes its work before the second, the remaining work of the second CPU is divided between the first and the second.

7. Parallel operating systems: These are used to interface multiple networked computers to complete tasks in parallel. Parallel operating systems are able to use software to manage all of the different resources of the computers running in parallel, such as memory, caches, storage space, and processing power. A parallel operating system works by dividing sets of calculations into smaller parts and distributing them between the machines on a network.

Process:

A process can be defined in any of the following ways

  • A process is a program in execution.
  • It is an asynchronous activity.
  • It is the entity to which processors are assigned.
  • It is the dispatchable unit.
  • It is the unit of work in a system.

A process is more than the program code. It also includes the current activity as represented by the following:

  • The current value of Program Counter (PC)
  • Contents of the processor’s registers
  • Value of the variables
  • The process stack contains temporary data such as subroutine parameters, return addresses, and temporary variables.
  • A data section that contains global variables.

Process in Memory:

Each process is represented by a Process Control Block (PCB), also called a task control block.

PCB: A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor.


The PCB contains important information about the specific process, including the following (see the sketch after this list):

  • The current state of the process, i.e., whether it is ready, running, or waiting.
  • A unique identifier for the process, so the OS can tell processes apart.
  • A pointer to the parent process.
  • Similarly, a pointer to the child process (if it exists).
  • The priority of the process (a part of CPU scheduling information).
  • Pointers to locate memory of processes.
  • A register save area.
  • The processor it is running on.
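
To make the list above concrete, here is a minimal Python sketch of a PCB as a plain record. The field names are illustrative only and are not taken from any real kernel.

```python
from dataclasses import dataclass, field
from typing import Optional

# A sketch of a PCB as a plain record; field names are illustrative,
# not taken from any real kernel.
@dataclass
class PCB:
    pid: int                                       # unique process identifier
    state: str = "new"                             # new/ready/running/waiting/terminated
    parent_pid: Optional[int] = None               # pointer to the parent process
    priority: int = 0                              # CPU scheduling information
    program_counter: int = 0                       # saved PC for context switches
    registers: dict = field(default_factory=dict)  # register save area
    memory_base: int = 0                           # pointers to locate process memory
    memory_limit: int = 0
    cpu: Optional[int] = None                      # the processor it is running on

print(PCB(pid=42, state="ready", priority=5))
```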

Process State Model


Process state: The process state consists of everything necessary to resume the process execution if it is somehow put aside temporarily.

The process state consists of at least the following: 

  • Code for the program.
  • Program’s static data.
  • Program’s dynamic data.
  • Program’s procedure call stack.
  • Contents of general-purpose registers.
  • Contents of the program counter (PC)
  • Contents of program status word (PSW).
  • Operating system resources in use.

 A process goes through a series of discrete process states.

  • New State: The process being created.
  • Running State: A process is said to be running if it has the CPU, that is, the process actually uses the CPU at that particular instant.
  • Blocked (or waiting) State: A process is said to be blocked if it is waiting for some event to happen such as an I/O completion before it can proceed. Note that a process is unable to run until some external event happens.
  • Ready State: A process is said to be ready if it could use a CPU were one available. A ready process is runnable but temporarily stopped to let another process run.
  • Terminated state: The process has finished execution.

Dispatcher:

  • It is the module that gives control of the CPU to the process selected by the short-term scheduler.
  • Functions of Dispatcher: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.

Thread:

A thread is a single sequence stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread.

  • A thread can be in any of several states (Running, Blocked, Ready, or Terminated).
  • Each thread has its own stack.
  • A thread consists of a program counter (PC), a register set, and a stack space. Unlike processes, threads are not independent of one another: the threads of a task share their code section, data section, and OS resources (such as open files and signals) with one another (see the sketch below).
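
As a quick illustration of these properties, here is a minimal Python sketch (with made-up worker names) in which two threads of the same process each run on their own stack while writing results into shared data:

```python
import threading

# Two threads of one process: each has its own stack (local variables)
# and program counter, but they share the process's data section.
results = {}   # shared by all threads of this process

def worker(name: str, n: int) -> None:
    local_sum = sum(range(n))   # lives on this thread's private stack
    results[name] = local_sum   # written into shared data

t1 = threading.Thread(target=worker, args=("t1", 10))
t2 = threading.Thread(target=worker, args=("t2", 20))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # {'t1': 45, 't2': 190}
```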

Multithreading:

An application typically is implemented as a separate process with several threads of control.

There are two types of threads.

  1. User threads: They sit above the kernel and are managed without kernel support. User-level threads are implemented in user-level libraries rather than via system calls, so switching threads does not require calling the operating system or interrupting the kernel. In fact, the kernel knows nothing about user-level threads and manages the process as if it were single-threaded.
  2. Kernel threads: Kernel threads are supported and managed directly by the operating system. Instead of the thread table in each process, the kernel has a thread table that keeps track of all threads in the system.

Advantages of Thread

  • Thread minimizes context switching time.
  • The use of threads provides concurrency within a process.
  • Efficient communication.
  • Economy: It is more economical to create and context-switch threads than processes.
  • Utilization of multiprocessor architectures to a greater scale and efficiency.

Difference between Process and Thread:

  • A process is heavyweight, while a thread is a lightweight unit within a process.
  • Process switching requires interaction with the operating system, while switching between threads of the same process is cheaper.
  • Each process runs in its own address space, while all threads of a process share the code section, data section, and OS resources such as open files.
  • Processes are independent of one another, while threads are not: one misbehaving thread can affect the other threads of its process.

Inter-Process Communication: 

  • Processes executing concurrently in the operating system may be either independent or cooperating processes.
  • A process is independent if it can’t affect or be affected by the other processes executing in the system.
  • Any process that shares data with other processes is a cooperating process.

There are two fundamental models of IPC:

  • Shared memory: In the shared memory model, a region of memory that is shared by the cooperating process is established. The process can then exchange information by reading and writing data to the shared region.
  • Message passing: In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes (see the sketch below).
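
Below is a minimal sketch of the message-passing model using Python's multiprocessing module; the producer/consumer names and the message text are made up for illustration. The two processes share no memory directly and communicate only through the queue.

```python
from multiprocessing import Process, Queue

# Message passing: the two processes share no memory and communicate
# only through the queue provided by the OS's IPC machinery.
def producer(q: Queue) -> None:
    q.put("hello from the producer")

def consumer(q: Queue) -> None:
    print("consumer received:", q.get())

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```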

CPU Scheduling: 

CPU Scheduling is the process by which an operating system decides which program gets to use the CPU. CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system makes the computer more productive.

CPU Schedulers: Schedulers are special system software that handles process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.

CPU Scheduling algorithms:

1. First Come First Serve (FCFS)

  • Jobs are executed on a first-come, first-serve basis.
  • Easy to understand and implement.
  • Poor performance, as the average waiting time is high.

2. Shortest Job First (SJF)

  • The best approach to minimize waiting time.
  • Impossible to implement exactly in practice, because the scheduler must know in advance how much CPU time each process will take (see the sketch below).
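
To see why SJF minimizes waiting time, here is a minimal sketch that computes the average waiting time for the same set of jobs under FCFS order and under SJF order. The burst times are made-up sample data, and all jobs are assumed to arrive at time 0.

```python
# Average waiting time for jobs that all arrive at time 0.
def avg_waiting_time(bursts: list) -> float:
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # a job waits for everything scheduled before it
        elapsed += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                      # made-up CPU burst times
print(avg_waiting_time(bursts))          # FCFS order: 17.0
print(avg_waiting_time(sorted(bursts)))  # SJF order:   3.0
```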

3. Priority Based Scheduling

  • Each process is assigned a priority. The process with the highest priority is to be executed first and so on.
  • Processes with the same priority are executed on a first come first serve basis.
  • Priority can be decided based on memory requirements, time requirements, or any other resource requirement.

4. Round Robin Scheduling

  • Each process is provided a fixed time to execute called quantum.
  • Once a process has executed for its time period, it is preempted and another process executes for its time period (see the sketch after this list).
  • Context switching is used to save states of preempted processes.
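
Here is a minimal sketch of Round Robin with a made-up set of processes and a quantum of 4; context-switch overhead is ignored for simplicity.

```python
from collections import deque

# Round Robin: each process runs for at most one quantum, then is
# preempted and moved to the back of the ready queue.
def round_robin(bursts: dict, quantum: int) -> None:
    queue, clock = deque(bursts.items()), 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run                              # context-switch cost ignored
        if remaining > run:
            queue.append((name, remaining - run)) # preempted, re-queued
        else:
            print(f"{name} finishes at t={clock}")

round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
# P2 finishes at t=7, P3 at t=10, P1 at t=30
```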

5. Multi-Queue Scheduling

  • Multiple queues are maintained for processes.
  • Each queue can have its own scheduling algorithms.
  • Priorities are assigned to each queue.

Synchronization:

  • Concurrency arises in three different contexts:
    • Multiple applications – Multiple programs are allowed to dynamically share processing time.
    • Structured applications – Some applications can be effectively programmed as a set of concurrent processes.
    • Operating system structure – Operating systems themselves are implemented as a set of processes.
  • Concurrent processes (or threads) often need access to shared data and shared resources.
    • Processes use and update shared data such as shared variables, files, and databases.
  • Writing must be mutually exclusive to prevent a condition leading to inconsistent data views.
  • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.

Race Condition

  • The race condition is a situation where several processes access (read/write) shared data concurrently and the final value of the shared data depends upon which process finishes last
    • The actions performed by concurrent processes will then depend on the order in which their execution is interleaved.
  • To prevent race conditions, concurrent processes must be coordinated or synchronized.
    • It means that neither process will proceed beyond a certain point in the computation until both have reached their respective synchronization points (see the sketch below).
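
The following minimal Python sketch shows the idea: two threads perform a read-modify-write on a shared counter, and a lock makes the critical section mutually exclusive. The counter and thread count are made up for illustration.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(use_lock: bool) -> None:
    global counter
    for _ in range(100_000):
        if use_lock:
            with lock:        # the critical section is now mutually exclusive
                counter += 1
        else:
            counter += 1      # unsynchronized read-modify-write: a race

threads = [threading.Thread(target=increment, args=(True,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 200000 with the lock; often less without it
```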

Critical Section/Region

  1. Consider a system consisting of n processes all competing to use some shared data.
  2. Each process has a code segment, called a critical section, in which the shared data is accessed.

The Critical-Section Problem

  1. The critical-section problem is to design a protocol that the processes can use to cooperate. The protocol must ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
  2. The critical section problem is to design a protocol that the processes can use so that their actions will not depend on the order in which their execution is interleaved (possibly on many processors).

Deadlock:

A deadlock situation can arise if the following four conditions hold simultaneously in a system.

  • Mutual Exclusion: Resources must be allocated to processes at any time in an exclusive manner and not on a shared basis for a deadlock to be possible. If another process requests that resource, the requesting process must be delayed until the resource has been released.
  • Hold and Wait Condition: A process must be able to request new resources while still holding the ones it already has, i.e., it does not have to release the held resources before requesting new ones. If this were not true, a deadlock could never take place.
  • No Preemption Condition: Resources can’t be preempted. A resource can be released only voluntarily by the process holding it after that process has completed its task.
  • Circular Wait Condition: There must exist a set {P0, P1, P2, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Resource Allocation Graph: The resource allocation graph consists of a set of vertices V and a set of edges E. Set of vertices V is partitioned into two types

  1. P = {Pl, P2, … , Pn}, the set consisting of all the processes in the system.
  2. R = {Rl, R2, … , Rm}, the set consisting of all resource types in the system.
  • A directed edge Pi → Rj is known as a request edge.
  • A directed edge Rj → Pi is known as an assignment edge.

Resource Instance

  • One instance of resource type R1.
  • Two instances of resource type R2.
  • One instance of resource type R3.
  • Three instances of resource type R4

Process States

  • Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type Rl.
  • Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of resource type R3.
  • Process P3 is holding an instance of R3.
  • Basic facts related to resource allocation graphs are given below

Note: If the graph consists of no cycle it means there is no deadlock in the system.

If the graph contains a cycle:

  1. If there is only one instance per resource type, then there is a deadlock.
  2. If there are several instances per resource type, then there may or may not be a deadlock (see the sketch below).
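
Deadlock detection in a resource-allocation graph therefore reduces to cycle detection. Here is a minimal depth-first-search sketch over a made-up graph with request and assignment edges:

```python
# Depth-first search for a cycle in a resource-allocation graph whose
# nodes mix processes (P*) and resources (R*).
def has_cycle(graph: dict) -> bool:
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u) -> bool:
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:   # back edge found: a cycle
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Request edges P1 -> R1, P2 -> R2; assignment edges R1 -> P2, R2 -> P1.
graph = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(graph))   # True: P1 -> R1 -> P2 -> R2 -> P1
```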

Deadlock Handling Strategies

In general, there are four strategies for dealing with the deadlock problem:

  1. The Ostrich Approach: Just ignore the deadlock problem altogether.
  2. Deadlock Detection and Recovery: Detect deadlock and, when it occurs, take steps to recover.
  3. Deadlock Avoidance: Avoid deadlock through careful resource scheduling.
  4. Deadlock Prevention: Prevent deadlock by resource scheduling so as to negate at least one of the four conditions.

Deadlock Prevention

Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions can’t hold.

  • Elimination of “Mutual Exclusion” Condition
  • Elimination of “Hold and Wait” Condition
  • Elimination of “No-preemption” Condition
  • Elimination of “Circular Wait” Condition

Deadlock Avoidance

This approach to the deadlock problem anticipates deadlock before it actually occurs.

A deadlock avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular wait condition can never exist. The resource allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.

Safe State: A state is safe if the system can allocate resources to each process and still avoid a deadlock.


A system is in a safe state if there exists a safe sequence of all processes. A deadlock state is an unsafe state. Not all unsafe states cause deadlocks. It is important to note that an unsafe state does not imply the existence or even the eventual existence of a deadlock. What an unsafe state does imply is simply that some unfortunate sequence of events might lead to a deadlock.
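
Here is a minimal, Banker's-style sketch of a safe-state check for a single resource type; the allocation and maximum-need figures are a made-up example (12 units in total, 3 currently free).

```python
# Banker's-style safe-state check for a single resource type.
def is_safe(available: int, allocated: list, max_need: list) -> bool:
    need = [m - a for m, a in zip(max_need, allocated)]
    finished = [False] * len(allocated)
    work = available
    for _ in range(len(allocated)):
        progressed = False
        for i in range(len(allocated)):
            if not finished[i] and need[i] <= work:
                work += allocated[i]   # process i runs to completion, releases all
                finished[i] = True
                progressed = True
        if not progressed:
            break
    return all(finished)

# P0 holds 5 (max 10), P1 holds 2 (max 4), P2 holds 2 (max 9); 3 units free.
print(is_safe(available=3, allocated=[5, 2, 2], max_need=[10, 4, 9]))  # True
```

Here the safe sequence is P1, P0, P2: after P1 finishes and releases its 2 units, P0's remaining need can be met, and so on.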

Address Binding: Binding of instructions and data to memory addresses.

  1. Compile-time: if process location is known then absolute code can be generated.
  2. Load time: The compiler generates relocatable code which is bound at load time.
  3. Execution time: If a process can be moved from one memory segment to another then binding must be delayed until run time.

Dynamic Loading:

  • Routine is not loaded until it is called.
  • Better memory-space utilization;
  • The unused routine is never loaded.
  • Useful when large amounts of code are needed to handle infrequently occurring cases.
  • No special support from the operating system is required; implemented through program design.

Dynamic Linking:

  • Linking postponed until execution time.
  • A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
  • The stub replaces itself with the address of the routine and then executes the routine.
  • The operating system is needed to check whether the routine is in another process's memory address space.

Overlays: These techniques allow us to keep in memory only those instructions and data that are required at a given time. Other instructions and data are loaded into the memory space occupied by the previous ones when they are needed.

Swapping: Consider an environment that supports multiprogramming using say Round Robin (RR) CPU scheduling algorithm. Then, when one process has finished executing for one time quantum, it is swapped out of memory to a backing store.

The memory manager then picks up another process from the backing store and loads it into the memory occupied by the previous process. Then, the scheduler picks up another process and allocates the CPU to it.

Memory Management Techniques

Memory management is the functionality of an operating system that handles or manages primary memory. Memory management keeps track of each and every memory location whether it is allocated to some process or is free.

There are two ways for memory allocation as given below

Single Partition Allocation: The memory is divided into two parts: one for the operating system and the other for user programs. The operating system's code and data are protected from being modified by user programs by means of a base register.

Multiple Partition Allocation: The multiple partition allocation may be further classified as

Fixed Partition Scheme: Memory is divided into a number of fixed-size partitions. Then, each partition holds one process. This scheme supports multiprogramming as a number of processes may be brought into memory and the CPU can be switched from one process to another.

When a process arrives for execution, it is put into the input queue of the smallest partition, which is large enough to hold it.

Variable Partition Scheme: A block of available memory is designated as a hole. At any time, a set of holes exists, consisting of holes of various sizes scattered throughout memory.

When a process arrives and needs memory, this set of holes is searched for a hole that is large enough to hold the process. If the hole is too large, it is split into two parts. The unused part is added to the set of holes. All holes which are adjacent to each other are merged.

There are different ways of implementing the allocation of partitions from a list of free holes (see the sketch after this list), such as:

  • first-fit: allocate the first hole that is big enough
  • best-fit: allocate the smallest hole that is big enough; the entire list of holes must be searched, unless it is ordered by size
  • next-fit: scan holes from the location of the last allocation and choose the next available block that is large enough (can be implemented using a circular linked list)
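
A minimal sketch of first-fit and best-fit over a made-up list of free holes, each represented as a (start address, size) pair:

```python
# Free holes as (start_address, size) pairs; both functions return the
# chosen start address, or None if no hole is large enough.
def first_fit(holes, request):
    for start, size in holes:
        if size >= request:
            return start
    return None

def best_fit(holes, request):
    candidates = [(size, start) for start, size in holes if size >= request]
    return min(candidates)[1] if candidates else None

holes = [(0, 100), (200, 500), (800, 212), (1100, 300)]
print(first_fit(holes, 212))   # 200: the first hole that is big enough
print(best_fit(holes, 212))    # 800: the smallest hole that is big enough
```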

Binding instructions and data to memory addresses can be done in the following ways

  • Compile-time: When it is known at compile time where the process will reside, compile-time binding is used to generate the absolute code.
  • Load time:  When it is not known at compile time where the process will reside in memory, then the compiler generates re-locatable code.
  • Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.

Paging

It is a memory management technique, which allows the memory to be allocated to the process wherever it is available. Physical memory is divided into fixed-size blocks called frames. Logical memory is broken into blocks of the same size called pages. The backing store is also divided into the same size blocks.

When a process is to be executed, its pages are loaded into any available frames. Every logical address generated by the CPU is divided into two parts: the page number (p) and the page offset (d). The page number is used as an index into a page table.

Each entry in the page table contains the base address of the corresponding frame in physical memory (f). This frame base address is combined with the page offset (d) to give the actual address in memory (see the sketch below).
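
Here is a minimal sketch of that translation, assuming a 4 KB page size and a made-up page table:

```python
PAGE_SIZE = 4096                    # an assumed 4 KB page size
page_table = {0: 5, 1: 2, 2: 7}     # made-up mapping: page number -> frame number

def translate(logical_addr: int) -> int:
    page, offset = divmod(logical_addr, PAGE_SIZE)  # split into (p, d)
    frame = page_table[page]                        # page-table lookup
    return frame * PAGE_SIZE + offset               # frame base + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 2 -> 0x2234
```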

Virtual Memory

Virtual memory is the separation of user logical memory from physical memory. It is a technique for running a process whose size exceeds the available main memory: a memory management scheme that allows the execution of a partially loaded process.

Advantages of Virtual Memory

  • Logical address space can be much larger than physical address space.
  • Allows address spaces to be shared by several processes.
  • Less I/O is required to load or swap a process in memory, so each user can run faster.

Segmentation

  • The logical address is divided into blocks called segments i.e., logical address space is a collection of segments. Each segment has a name and a length.
  • The logical address consists of two parts: <segment-number, offset>.
  • Segmentation is a memory-management scheme that supports this user view of memory. All the locations within a segment are placed in contiguous locations in primary storage (see the sketch below).
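
A minimal sketch of segment-based translation, using a made-up segment table of (base, limit) pairs and raising an error when the offset exceeds the limit:

```python
# Made-up segment table: segment number -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset must fall inside the segment
        raise MemoryError("segmentation fault: offset outside segment")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(1, 399))   # 6699
```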

File System

The file system consists of two parts:

  1. A collection of files
  2. A directory structure

The file management system can be implemented as one or more layers of the operating system.

The common responsibilities of the file management system include the following

  • Mapping of access requests from logical to physical file address space.
  • Transmission of file elements between main and secondary storage.
  • Management of secondary storage, such as keeping track of status, allocation, and deallocation of space.
  • Support for protection and sharing of files and the recovery and possible restoration of the files after system crashes.

File Attributes

Each file is referred to by its name. Files are named for the convenience of users, and once a file is named, it becomes independent of the user and the process. File attributes include the following:

  • Name
  • Type
  • Location
  • Size
  • Protection
  • Time and date

Disk Scheduling

One of the responsibilities of the OS is to use the hardware efficiently. For the disk drives, meeting this responsibility entails having fast access time and large disk bandwidth.

Access time has two major components

  • Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
  • The rotational latency is the additional time for the disk to rotate the desired sector to the disk head. It is not fixed, so we can take the average value.

Disk bandwidth is the total number of bytes transferred, divided by the total time between the first service and the completion of the last transfer.

FCFS Scheduling: FCFS, also known as First In First Out (FIFO), simply services requests in the order in which they arrive in the queue.

FIFO scheduling has the following features:

  • First come first served scheduling.
  • Processes request sequentially.
  • Fair to all processes, but it generally does not provide the fastest service.
  • Consider a disk queue with requests for I/O to blocks on the cylinders.

Shortest Seek Time First (SSTF) Scheduling: It selects the request with the minimum seek time from the current head position. SSTF scheduling is a form of SJF scheduling and may cause starvation of some requests. It is not an optimal algorithm, but it is an improvement over FCFS.

SCAN Scheduling: In the SCAN algorithm, the disk arm starts at one end of the disk and moves toward the other end, servicing requests as it reaches each cylinder until it gets to the other end of the disk. At the other end, the direction of head movement is reversed and servicing continues. The head continuously scans back and forth across the disk. The SCAN algorithm is sometimes called the elevator algorithm, since the disk arm behaves just like an elevator in a building, first servicing all the requests going up and then reversing to service requests the other way.

C-SCAN Scheduling: Circular SCAN is a variant of SCAN designed to provide a more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip. The C-SCAN scheduling algorithm essentially treats the cylinders as a circular list that wraps around from the final cylinder to the first one (see the sketch below).
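
To compare these policies numerically, here is a minimal sketch that computes the total head movement under FCFS and SSTF for a textbook-style request queue starting at cylinder 53 (the request numbers are sample data):

```python
# Total head movement (in cylinders) under FCFS and SSTF.
def fcfs_movement(requests, head):
    total = 0
    for r in requests:                 # service strictly in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf_movement(requests, head):
    pending, total = list(requests), 0
    while pending:                     # always pick the nearest request
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # sample request queue
print(fcfs_movement(queue, 53))   # 640
print(sstf_movement(queue, 53))   # 236
```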
