Short Notes For Computer Networks

By Aditi Verma|Updated : August 27th, 2021

Here is a list of important formulas and topics in computer networks that a student can use while revising or preparing.


Physical Layer:

  • The physical layer coordinates the functions required to transmit a bit stream over a physical medium.
  • It deals with the mechanical and electrical specifications of interface and transmission medium.
  • It also defines the procedures and functions that physical devices and interfaces have to perform for transmission to occur.
  • An Ethernet network interface card performs functions at both the physical layer and the data link layer.

Functions of Physical Layer: 

  • Physical layer defines characteristics of the interface between the devices and the transmission medium.
  • It defines the type of transmission medium.
  • It defines the transmission rate (the number of bits sent each second).
  • It performs synchronization of sender and receiver clocks.
  • It is concerned with the connection of devices to the medium.
    1. Point-to-point configuration: Two devices are connected through a dedicated link.
    2. Multipoint configuration: A link is shared among several devices.
  • It is concerned with the physical topology.
  • It defines the direction of transmission called transmission mode (simplex, half-duplex or duplex).
  • It transmits a bitstream over the communication channel.
  • Hardware Used: Repeater and Hub.
  • Data Unit: Bitstream

Data Link Layer:

  • The data link layer transforms the physical layer, a raw transmission facility, to a reliable link.
  • It is responsible for Node-to-Node delivery.
  • It makes the physical layer appear error-free to the Network layer.

Functions of Data Link Layer: 

  • Data Framing: Division of the stream of bits received from the network layer into manageable data units called frames. Segmentation of upper layer datagrams (packets) into frames.
  • Flow Control: It is to manage communication between a high-speed transmitter with a low-speed receiver.
  • Error Control: It provides a mechanism to detect and retransmit damaged or lost frames and to prevent duplication of frames. To achieve error control, a trailer is added at the end of a frame.
  • Access Control: Gives mechanism to determine which device has control over the link at any given time, if two or more devices are connected to the same link.
  • Physical Addressing: Adding a header to the frame to define the physical address of the sender (source address) and/or receiver (destination address) of the frame.
  • Hardware Used: Bridges and switches.
  • Data Unit: Frames
  • Protocols Used: Simplex protocol, Stop-and-Wait protocol, Sliding Window protocols, HDLC (High-Level Data Link Control), SDLC (Synchronous Data Link Control), PPP (Point-to-Point Protocol).

Network Layer:

  • The network layer is responsible for the source-to-destination delivery of a packet, possibly across multiple networks (links).
  • If the two systems are connected to the same link, there is usually no need for a network layer.
  • If the two systems are attached to different networks (links) with connecting devices between networks, there is often a need for the network layer to accomplish source to destination delivery.

Functions of the Network Layer: 

  • Logical Addressing: If a packet passes the network boundary, we need a logical addressing system to distinguish the source and destination systems.
  • Routing: Independent networks or links are connected together with the help of routers or gateways. Routers route the packets to their final destination. Network layer is responsible for providing routing mechanism.
  • Hardware Used: Routers
  • Data Units: Packets
  • Protocols Used: IP (Internet Protocol), NAT (Network Address Translation), ARP (Address Resolution Protocol), ICMP (Internet Control Message Protocol), BGP (Border Gateway Protocol), RARP (Reverse Address Resolution Protocol), DHCP (Dynamic Host Configuration Protocol), BOOTP, OSPF.

Transport Layer:

  • The transport layer is responsible for source-to-destination (end-to-end) delivery of the entire message.
  • The network layer does not recognize any relationship between the packets delivered.
  • The network layer treats each packet independently, as though each packet belonged to a separate message, whether or not it does. The transport layer ensures that the whole message arrives intact and in order.

Functions of Transport Layer: 

  • Service Point Addressing: The transport layer header must include a type of address called the service point address (or port address).
  • Segmentation and Reassembly: A message is divided into transmittable segments, each segment containing a sequence number.
  • Flow Control: Flow control at this layer is performed end to end rather than across a single link.
  • Error Control: This layer performs end-to-end error control by ensuring that the entire message arrives at the receiving transport layer without error (damage, loss or duplication). Error correction is usually achieved through retransmission.
  • Connection Control: The transport layer can deliver the segments using either a connection-oriented or a connectionless approach.
  • Hardware Used: Transport gateway
  • Data Unit: Segments
  • Protocols Used: TCP (Transmission Control Protocol) for the connection-oriented approach and UDP (User Datagram Protocol) for the connectionless approach.

Session Layer:

  • The session layer is the network dialog controller.
  • It establishes, maintains and synchronizes the interaction between communicating systems.
  • It also plays an important role in keeping applications data separate.

Functions of Session Layer: 

  • Dialog Control: Session layer allows the communication between two processes to take place either in half-duplex or full-duplex. It allows applications functioning on devices to establish, manage and terminate a dialogue through a network.
  • Synchronization: The session layer allows a process to add checkpoints (synchronization points) into a stream of data.

Presentation Layer:

  • It is responsible for how an application formats data to be sent out onto the network.
  • It basically allows an application to read and understand the message.

Functions of Presentation Layer: 

  • Translation: Different systems use different encoding systems, so the presentation layer provides interoperability between these different encoding methods. At the sender end, this layer changes the information from the sender-dependent format into a common format; at the receiver end, the presentation layer changes the common format into the receiver-dependent format.
  • Encryption and Decryption: This layer provides encryption and decryption mechanisms to ensure privacy when carrying sensitive information. Encryption means the sender transforms the original information into another form; at the receiver end, the decryption mechanism reverses the transformed data back into its original form.
  • Compression: This layer uses compression mechanism to reduce the number of bits to be transmitted. Data compression becomes important in the transmission of multimedia such as text, audio and video.

Application Layer:

  • This layer enables the user, whether human or software, to access the network.
  • It provides user interfaces and support for services such as electronic mail, remote file access and transfer, shared database management and other types of distributed information services.
  • Examples: Telnet, FTP, etc

Functions of Application Layer: 

  • Network Virtual Terminal: It is a software version of a physical terminal and allows a user to logon to a remote host. To do so, the application creates a software emulation of a terminal at the remote host.
  • File Transfer, Access and Management: It allows a user to access files, retrieve files, manage files or control files in a remote computer.
  • Mail Services: It provides Electronic messaging (e-mail storage and forwarding).
  • Directory Services: It provides distributed database sources and access for global information about various objects and services.

IP Addressing:

IPv4 Header


IPv6 Header


Classes and Subnetting

There are currently five different field-length patterns in use, each defining a class of address.

An IP address is 32 bits long. One portion of the address indicates the network (Net ID) and the other portion indicates the host (or router) on that network (Host ID).

To reach a host on the Internet, we must first reach the network using the first portion of the address (Net ID); then we must reach the host itself using the second portion (Host ID).

The further division of a network into smaller networks is called subnetting, and the smaller networks are called subnetworks.


For Class A: The first bit of the Net ID should be 0, as in the following pattern

01111011 . 10001111 . 11111100 . 11001111

For Class B: The first 2 bits of the Net ID should be 1 and 0 respectively, as in the following pattern

10011101 . 10001111 . 11111100 . 11001111

For Class C: The first 3 bits of the Net ID should be 1, 1 and 0 respectively, as follows

11011101 . 10001111 . 11111100 . 11001111

For Class D: The first 4 bits should be 1110, as in the pattern

11101011 . 10001111 . 11111100 . 11001111

For Class E: The first 4 bits should be 1111, like

11110101 . 10001111 . 11111100 . 11001111
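
The leading-bit checks above can be summarised in a small helper (function name illustrative):

```python
def ip_class(first_octet: int) -> str:
    """Classify an IPv4 address by the leading bits of its first octet."""
    if first_octet < 128:       # 0xxxxxxx -> Class A (0-127)
        return "A"
    if first_octet < 192:       # 10xxxxxx -> Class B (128-191)
        return "B"
    if first_octet < 224:       # 110xxxxx -> Class C (192-223)
        return "C"
    if first_octet < 240:       # 1110xxxx -> Class D (224-239)
        return "D"
    return "E"                  # 1111xxxx -> Class E (240-255)

print(ip_class(0b01111011))     # first octet 123 -> A
print(ip_class(0b11011101))     # first octet 221 -> C
```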

Class Ranges of Internet Addresses in Dotted Decimal Format

  • Class A: 0.0.0.0 to 127.255.255.255
  • Class B: 128.0.0.0 to 191.255.255.255
  • Class C: 192.0.0.0 to 223.255.255.255
  • Class D: 224.0.0.0 to 239.255.255.255
  • Class E: 240.0.0.0 to 255.255.255.255


Three Levels of Hierarchy: Adding subnetworks creates an intermediate level of hierarchy in the IP addressing system. Now we have three levels: net ID, subnet ID and host ID.



Classless Addressing Scheme:

  • No classes for the division of IP addresses.
  • Notation: x.y.z.w/n, where n denotes the number of mask (network) bits.
  • Number of host IDs = 2^(32 − n)

Rules:

  1. Addresses in a block are contiguous.
  2. The number of addresses in a block must be a power of 2.
  3. The first address of a block must be exactly divisible by the number of addresses in the block.
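
These rules can be checked with Python's standard ipaddress module (the block 205.16.37.32/28 is chosen purely for illustration):

```python
import ipaddress

# For x.y.z.w/n the mask fixes the first n bits; the block holds 2**(32-n) addresses.
# strict=True raises an error if the first address is not aligned to the block size.
net = ipaddress.ip_network("205.16.37.32/28", strict=True)

print(net.num_addresses)        # 2**(32-28) = 16
print(net.network_address)      # first address: 205.16.37.32 (divisible by 16)
print(net.broadcast_address)    # last address:  205.16.37.47
```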

Flow and Error Control techniques: 


  • The procedures to be followed by the sender and receiver for efficient transmission and reception are called flow control.
  • Two approaches :

Stop and wait — Error control (stop and wait ARQ)

Sliding Window Protocol (SWP)

Stop and Wait ARQ


  • Only one frame at a time is on the link, giving poor utilization and poor efficiency.
  • Efficiency = 1 / (1 + 2a), where a = propagation time / transmission time
  • It is an example of a closed-loop control protocol.
  • Positive ACKs are numbered in Stop and Wait; negative ACKs are not numbered.
  • Throughput = 1 window/RTT = 1 packet/RTT
  • It is the special case of SWP with window size = 1.

Go-back N (GBN) ARQ

  • Receiver window size (RWS) = 1
  • Sender window size (SWS) = 2^k − 1, where k is the number of bits reserved for the sequence number in the header.
  • Efficiency = (2^k − 1) / (1 + 2a)
  • WS + WR ≤ ASN (Available Sequence Numbers) = 2^k
  • Uses cumulative/piggybacked acknowledgements.
  • GBN is called a "conservative protocol".

Selective Repeat ARQ (SR)

  • WS + WR ≤ maximum ASN (2^k), so WS = WR = 2^(k−1)
  • It can use piggybacked, cumulative or independent acknowledgements.
  • It accepts out-of-order packets.
  • Efficiency = 2^(k−1) / (1 + 2a)
  • Throughput = window size / RTT
  • Round Trip Time (RTT): It is the minimum acknowledgement waiting time.

RTT = 2 x Propagation delay

  • Time Out: It is the maximum acknowledgment waiting time
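
The efficiency formulas quoted above differ only in the window size; a small helper makes the comparison concrete (names and numbers illustrative):

```python
def efficiency(window: int, a: float) -> float:
    """Link utilisation for a sliding-window ARQ protocol.
    window = 1 gives Stop-and-Wait, 2**k - 1 gives Go-Back-N,
    2**(k-1) gives Selective Repeat; a = propagation time / transmission time.
    Utilisation cannot exceed 100%, hence the cap at 1.0."""
    return min(1.0, window / (1 + 2 * a))

a = 2.0                              # e.g. propagation time = 2 x transmission time
print(efficiency(1, a))              # Stop-and-Wait: 1/(1+4) = 0.2
print(efficiency(2**3 - 1, a))       # Go-Back-N with k=3: 7/5, capped at 1.0
print(efficiency(2**(3 - 1), a))     # Selective Repeat with k=3: 4/5 = 0.8
```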

Error Control Technique:

Error Detection Code: Checksum


  • Two algorithms are involved in this process: a checksum generator at the sender end and a checksum checker at the receiver end.
  • The sender follows these steps
    • The data unit is divided into k sections each of n bits.
    • All sections are added together using 1's complement to get the sum.
    • The sum is complemented and becomes the checksum.
    • The checksum is sent with the data.
  • The receiver follows these steps
    • The received unit is divided into k sections each of n bits.
    • All sections are added together using 1's complement to get the sum.
    • The sum is complemented.
    • If the result is zero, the data are accepted, otherwise they are rejected.
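
The sender and receiver steps can be sketched as follows (function names and the 16-bit section size are illustrative):

```python
def ones_complement_sum(sections, bits=16):
    """Add k sections with end-around carry (1's complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for s in sections:
        total += s
        total = (total & mask) + (total >> bits)   # fold any carry back in
    return total

def make_checksum(sections, bits=16):
    """Sender: complement of the 1's complement sum of all sections."""
    return ones_complement_sum(sections, bits) ^ ((1 << bits) - 1)

# Sender appends the checksum; receiver sums everything and complements.
data = [0x4500, 0x0030, 0x4422]
ck = make_checksum(data)
received = data + [ck]
# Receiver check: complemented sum must be zero for the data to be accepted.
assert ones_complement_sum(received) ^ 0xFFFF == 0
print(hex(ck))
```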

Limitation of checksum:

  • It cannot detect errors in which changes to vertically aligned bits cancel each other out.
  • If noise modifies the data in such a way that the changes made to vertically placed bits cancel, the calculated checksum will still equal the received checksum. Such undetectable errors are known as vertical errors.

Cyclic Redundancy Check (CRC):

  • CRC is based on binary division.
  • A sequence of redundant bits called CRC or the CRC remainder is appended to the end of a data unit, so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
  • At its destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be intact and therefore is accepted.
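
The binary division can be sketched as follows (the generator x^3 + x + 1 and the data are illustrative):

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Binary long division (XOR) of the data, appended with zeros, by the generator."""
    n = len(generator) - 1                   # degree of the generator = CRC length
    dividend = [int(b) for b in data_bits + "0" * n]
    gen = [int(b) for b in generator]
    for i in range(len(data_bits)):
        if dividend[i]:                      # only subtract (XOR) when the leading bit is 1
            for j in range(len(gen)):
                dividend[i + j] ^= gen[j]
    return "".join(str(b) for b in dividend[-n:])

data, gen = "1001", "1011"                   # generator 1011 = x^3 + x + 1
crc = crc_remainder(data, gen)
print(crc)                                   # remainder, appended to the data unit

# Receiver: dividing the received codeword leaves no remainder -> accept.
# (Appending zeros again multiplies by x^n, which does not change divisibility.)
assert crc_remainder(data + crc, gen) == "0" * 3
```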

Selection Criteria for CRC generator:

  • The generator should consist of more than one bit (at least two terms).
  • The coefficient of the x^0 term should be 1, i.e. the generator should not be divisible by x; otherwise some errors go undetected.
  • If the generator contains the factor x + 1, all odd-numbered bit errors are detected.
  • CRC-32 detects virtually all errors encountered in practice and is the standard error detector used in Ethernet networks.

Error Correction: Hamming code is a set of error-correction codes that can be used to detect and correct the errors that can occur when data is transmitted from the sender to the receiver or stored.


  • Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc.).
  • All bit positions that are powers of 2 are marked as parity bits (1, 2, 4, 8, etc.).
  • All other bit positions are marked as data bits.
  • Each data bit is included in a unique set of parity bits, determined by its bit position in binary form.
    a. Parity bit 1 covers all bit positions whose binary representation includes a 1 in the least significant position (1, 3, 5, 7, 9, 11, etc.).
    b. Parity bit 2 covers all bit positions whose binary representation includes a 1 in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
    c. Parity bit 4 covers all bit positions whose binary representation includes a 1 in the third position from the least significant bit (4–7, 12–15, 20–23, etc.).
    d. Parity bit 8 covers all bit positions whose binary representation includes a 1 in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
    e. In general, each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
  • Since we check for even parity, set a parity bit to 1 if the total number of ones in the positions it checks is odd.
  • Set a parity bit to 0 if the total number of ones in the positions it checks is even.
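
The steps above can be sketched as an even-parity encoder (function name illustrative):

```python
def hamming_encode(data_bits):
    """Place data bits in the non-power-of-2 positions and fill even-parity bits.
    Positions are numbered from 1; parity bit p covers every position whose
    bitwise AND with p is non-zero."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:        # number of parity bits needed
        r += 1
    code = [0] * (m + r + 1)           # index 0 unused so indices match positions
    it = iter(data_bits)
    for pos in range(1, m + r + 1):
        if pos & (pos - 1):            # not a power of two -> data bit position
            code[pos] = next(it)
    for p in (1 << i for i in range(r)):
        ones = sum(code[pos] for pos in range(1, m + r + 1) if pos & p)
        code[p] = ones & 1             # 1 iff the covered positions hold an odd count
    return code[1:]

print(hamming_encode([1, 0, 1, 1]))    # classic (7,4) example
```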


IEEE 802.3 Frame Format: Maximum 802.3 frame size is 1518 bytes and the minimum size is 64 bytes.


  • Preamble field: Establishes bit synchronization and transceiver conditions so that the PLS circuitry synchs in with the received frame timing.
  • Start Frame Delimiter: The sequence 10101011 in a separate field.
  • Destination address: Hardware address (MAC address) of the destination station (usually 48 bits i.e. 6 bytes).
  • Source address: Hardware address of the source station (must be of the same length as the destination address, the 802.3 standard allows for 2 or 6 byte addresses).
  • Length: Specifies the length of the data segment, actually the number of LLC data bytes.
  • Pad: Zeros added to the data field to 'Pad out' a short data field to 46 bytes.
  • Data: Actual data which is allowed anywhere between 46 to 1500 bytes within one frame.
  • FCS: Frame Check Sequence to detect errors that occur during transmission.

Propagation Delay: Time taken for a signal to travel from the transmitter to the receiver

  • The speed of light is the fastest a signal will propagate:
    • 3 × 10^8 m/s through space
    • 2 × 10^8 m/s through copper

Transmission Delay (Time): Time taken to put the bits on the transmission medium. A transmission speed of 2 Mbps means 2 × 10^6 bits can be transmitted in 1 second.

Processing Delay: Time taken to execute protocols. (check for errors and send Acks etc.)

Queuing Delay: Only in packet switched networks.

  • Time spent waiting in buffer for transmission
  • Increases as load on network increases

Round Trip Delay: Round trip delay is defined as the time between the first bit of the message being put onto the transmission medium and the last bit of the acknowledgement being received back by the transmitter. It is the sum of all the delays detailed above. The round trip delay is a critical factor in the performance of packet-switched protocols and networks; indeed, it has been said that a good algorithm for estimating the round trip delay is at the heart of a good packet-switched protocol.
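
The delay components above combine numerically; a quick illustrative calculation (the link parameters are assumed, not from any standard):

```python
# Illustrative numbers: a 1 km copper link at 2 Mbps carrying a 1000-byte packet.
distance = 1_000                 # metres
speed = 2e8                      # m/s through copper
bandwidth = 2e6                  # bits per second
packet = 1000 * 8                # bits

propagation_delay = distance / speed          # 5 microseconds
transmission_delay = packet / bandwidth       # 4 milliseconds
print(propagation_delay, transmission_delay)
```

Note that on this short, slow link the transmission delay dominates; over long-haul links the propagation delay usually dominates instead.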

Channel efficiency (utilization) = 1 / (1 + 2BLe/cF)

If the frame length increases, the efficiency also increases.

F: Frame length

c: Speed of signal propagation

e: Contention slots per frame

L: Cable length (d)

B: Network bandwidth

a: Ratio of propagation delay to transmission delay

2τ: Duration of each contention slot = 2 × propagation delay

A: Probability that some station acquires the channel in a slot (at the optimal transmission probability, A = 1/e)

  • The probability of exactly one of n stations succeeding in putting its traffic on the network, each transmitting with probability P, is given as nP(1 − P)^(n−1).
  • CSMA/CD for Ethernet:

Utilization = 1 / (1 + 6.44a)


Wi-Fi Concepts: There are two general types of Wi-Fi transmission: DCF (Distributed Coordination Function) and PCF (Point Coordination Function). DCF is Ethernet in the air; it employs a very similar packet structure and many of the same concepts. Two problems make wireless different from wired:

  • The hidden station problem.
  • A high error rate.

These problems demand that a DCF Wi-Fi network be a CSMA/CA (Collision Avoidance) network rather than a CSMA/CD (Collision Detection) network. The result is the following protocol elements:

  • Positive acknowledgement: Every packet sent is positively acknowledged by the receiver. The next packet is not sent until a positive acknowledgement for the previous packet has been received.
  • Channel clearing: A transmission begins with an RTS (Request to Send), and the destination responds with a CTS (Clear to Send). Then the data packets flow; the channel is cleared by these two messages.
  • Channel reservation: Each packet carries a NAV (Network Allocation Vector) containing a number X. The channel is reserved for the correspondents (the sender and receiver of this packet) for an additional X microseconds after this packet, so once a station has the channel, it can hold it with the NAV. The final ACK carries a NAV of zero.

Transport Layer

There are two transport layer protocols as given below.

UDP (User Datagram Protocol)

UDP is a connectionless protocol. It provides a way for an application to encapsulate data in IP datagrams and send them without having to establish a connection.

  • Datagram-oriented
  • Unreliable, connectionless
  • Simple
  • Supports unicast and multicast
  • Useful for only a few applications, e.g., multimedia applications
  • Used heavily for services: network management (SNMP), routing (RIP), naming (DNS), etc.

UDP transmits segments consisting of an 8-byte header followed by the payload. The two port fields serve to identify the end points within the source and destination machines. When a UDP packet arrives, its payload is handed to the process attached to the destination port. The header fields are:

  • Source Port Address (16 bits)
  • Destination Port Address (16 bits)
  • Total Length (16 bits): the length of the header plus data
  • Checksum (16 bits) [optional in IPv4]

TCP (Transmission Control Protocol)

TCP provides full transport layer services to applications. TCP is a reliable stream transport, port-to-port protocol. The term stream, in this context, means connection-oriented: a connection must be established between both ends of a transmission before either may transmit data. By creating this connection, TCP generates a virtual circuit between sender and receiver that is active for the duration of the transmission.

TCP is a reliable, point-to-point, connection-oriented, full-duplex protocol.


State Transition Diagram at Transport Layer:


Congestion Control

Traffic Shaping

  • Another approach to congestion control is to shape the traffic before it enters the network.
  • Traffic shaping controls the rate at which packets are sent (not just how many are sent). It is used in ATM and integrated-services networks.
  • At connection setup time, the sender and carrier negotiate a traffic pattern (shape).
  • Two traffic shaping algorithms are as follows
    1. Leaky Bucket
    2. Token Bucket

The Leaky Bucket (LB) Algorithm

The Leaky Bucket algorithm is used to control the rate of traffic entering the network. It is implemented as a single-server queue with a constant service time; if the buffer (bucket) overflows, packets are discarded.


The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input, and sends nothing when the input is idle.

When packets are all the same size (as with ATM cells), the host should inject one packet per clock tick onto the network; for variable-length packets, it is better to allow a fixed number of bytes per tick.
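
A minimal packet-counting simulation of the single-server-queue view (function name and numbers illustrative):

```python
def leaky_bucket(arrivals, capacity, rate):
    """Simulate a leaky bucket tick by tick: add arriving packets (dropping
    any overflow beyond the bucket capacity), then drain up to `rate`
    packets at the constant output rate."""
    level, output, dropped = 0, [], 0
    for a in arrivals:
        if level + a > capacity:
            dropped += level + a - capacity   # bucket overflow -> discard
            level = capacity
        else:
            level += a
        sent = min(rate, level)               # constant service rate
        output.append(sent)
        level -= sent
    return output, dropped

# A bursty input of 5 then 7 packets is smoothed to at most 2 packets per tick.
out, lost = leaky_bucket([5, 0, 0, 7, 0], capacity=4, rate=2)
print(out, lost)
```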

Token Bucket (TB) Algorithm

In contrast to LB, the Token Bucket algorithm allows the output rate to vary depending on the size of the burst.



According to Token Bucket Algorithm:

                                                     C + ρs = Ms

          where, C = Capacity of the bucket

                      ρ = Token rate

                      s = Bursty traffic time in seconds

                      M = Maximum output rate
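
Rearranging C + ρs = Ms gives the burst length s = C / (M − ρ), the longest time the host can transmit at the full output rate. A quick check with illustrative numbers (a 250 KB bucket, 2 MB/s token rate, 25 MB/s output rate):

```python
# C + rho*s = M*s  =>  s = C / (M - rho)
C   = 250_000       # bucket capacity in bytes
rho = 2_000_000     # token (refill) rate, bytes/second
M   = 25_000_000    # maximum output rate, bytes/second

s = C / (M - rho)   # burst length in seconds
print(s)            # roughly 0.011 s, i.e. about 11 ms at full rate
```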

Congestion control at transport layer:

There are three steps to control the congestion:

  • Slow start algorithm: In this phase, the size of the sender's window increases exponentially until it becomes equal to the threshold value. Afterwards, congestion avoidance is used.
  • Congestion avoidance algorithm: Here the increase in the sender's window size is additive, i.e., the window grows by a fixed amount per RTT. This is also known as linear or additive increase.
  • Congestion detection algorithm: This is also known as multiplicative decrease, as the window size is reduced when congestion is detected. It works as follows:
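
The three phases can be sketched as a simple window trace (TCP Tahoe-style behaviour; the function name, threshold and loss round are illustrative):

```python
def cwnd_trace(threshold, rounds, loss_at=None):
    """Slow start (exponential growth) up to the threshold, then additive
    increase; on loss, the threshold halves and cwnd restarts at 1."""
    cwnd, trace = 1, []
    for t in range(rounds):
        trace.append(cwnd)
        if loss_at is not None and t == loss_at:
            threshold = max(cwnd // 2, 1)      # multiplicative decrease
            cwnd = 1                           # restart in slow start
        elif cwnd < threshold:
            cwnd *= 2                          # slow start: exponential
        else:
            cwnd += 1                          # congestion avoidance: additive
    return trace

print(cwnd_trace(threshold=8, rounds=10, loss_at=6))
```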


The graph can be shown as:


Routing Protocols

Distance Vector Routing:

  • In this routing scheme, each router periodically shares its knowledge about the entire network with its neighbours.
  • Each router has a table with information about network. These tables are updated by exchanging information with the immediate neighbours.
  • It is also known as the Bellman-Ford or Ford-Fulkerson algorithm.
  • It is used in the original ARPANET, and in the Internet as RIP.
  • Neighboring nodes in the subnet exchange their tables periodically to update each other on the state of the subnet (which makes this a dynamic algorithm). If a neighbor claims to have a path to a node which is shorter than your path, you start using that neighbor as the route to that node.
  • Distance vector protocols (a vector contains both distance and direction), such as RIP, determine the path to remote networks using hop count as the metric. A hop count is defined as the number of times a packet needs to pass through a router to reach a remote destination.
  • For IP RIP, the maximum hop is 15. A hop count of 16 indicates an unreachable network. Two versions of RIP exist: version 1 and version 2.
  • IGRP is another example of a distance vector protocol with a higher hop count of 255 hops.
  • Periodic updates are sent at a set interval. For IP RIP, this interval is 30 seconds.
  • Updates are sent to the broadcast address. Only devices running routing algorithms listen to these updates.
  • When an update is sent, the entire routing table is sent.
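
The table-exchange rule above is a Bellman-Ford relaxation; a minimal sketch (function name and topology illustrative):

```python
def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """Bellman-Ford relaxation: adopt the neighbor's route to a destination
    whenever going through that neighbor is cheaper than the current route."""
    for dest, dist in neighbor_table.items():
        candidate = link_cost + dist
        if dest not in my_table or candidate < my_table[dest][0]:
            my_table[dest] = (candidate, neighbor)   # (cost, next hop)
    return my_table

# Router A learns B's table over a cost-1 link and discovers a route to C.
table = {"A": (0, "A")}
table = dv_update(table, "B", {"A": 1, "B": 0, "C": 2}, link_cost=1)
print(table)   # C becomes reachable at cost 3 via next hop B
```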

Link State Routing:

  • In link state routing, each router shares its knowledge of its neighbourhood with all routers in the network. When a router floods the network with information about its neighbourhood, it is said to be advertising.
  • The basis of this advertising is a short packet called a Link State Packet (LSP).
  • OSPF (Open Shortest Path First) and IS-IS are examples of link state routing protocols.
  • A Link State Packet (LSP) contains the following information:
    1. The ID of the node that created the LSP;
    2. A list of directly connected neighbours of that node, with the cost of the link to each one;
    3. A sequence number;
    4. A time to live (TTL) for this packet.
  • The following sequence of steps is executed in link state routing:
    1. Discover your neighbours.
    2. Measure the delay to your neighbours.
    3. Bundle all the information about your neighbours together.
    4. Send this information to all other routers in the subnet.
    5. Compute the shortest path to every router with the information you receive: each router finds its own shortest paths to the other routers using Dijkstra's algorithm.
  • Link-state protocols implement an algorithm called shortest path first (SPF, also known as Dijkstra's algorithm) to determine the path to a remote destination.
  • There is no hop count limit. (For an IP datagram, the maximum time to live ensures that loops are avoided.)
  • Updates are sent only when changes occur, plus a full summary every 30 minutes by default. Updates are sent to a multicast address, and only devices running routing algorithms listen to them.
  • Updates are faster and convergence times are reduced, at the cost of higher CPU and memory requirements to maintain the link-state databases.
  • Link-state protocols maintain three separate tables:
    • Neighbour table: Contains a list of all neighbours and the interface through which each neighbour is connected. Neighbours are formed by exchanging Hello packets.
    • Topology table (link-state table): Contains a map of all links within an area, including each link's status.
    • Routing table: Contains the best route to each particular destination.
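
The shortest-path computation can be sketched with Dijkstra's algorithm over an adjacency dict (graph and costs illustrative):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's SPF over an adjacency dict {node: {neighbor: cost}}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # stale heap entry, skip
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w                # relax the edge u -> v
                heapq.heappush(heap, (d + w, v))
    return dist

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
print(shortest_paths(graph, "A"))   # A reaches C via B (cost 3), D via C (cost 6)
```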

Network Security:




  • Public Key: A key which can be transmitted on the channel.
  • Private Key: A key which must never be transmitted on the channel.


  • Symmetric Key Cryptography: A single key is used for both encryption and decryption of the data (e.g., DES, AES). The Diffie-Hellman algorithm is used to establish such a shared key.
  • Asymmetric Key Cryptography: Different keys are used for encryption and decryption of the data. Example: RSA algorithm.

Diffie HellMan Key Exchange Algorithm:

  • Choose a prime number n and a generator g (both public); let x and y be the secrets of the sender and the receiver respectively.
  • The sender calculates R1 = g^x mod n and transmits R1.
  • The receiver calculates R2 = g^y mod n and transmits R2.
  • Each side raises the received value to its own secret; the newly calculated shared key is

                     K_AB = g^(xy) mod n
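
The exchange can be traced with toy numbers (illustrative only; real deployments use very large primes):

```python
# Toy public values: n prime, g a generator; x, y are the private secrets.
n, g = 23, 5
x, y = 6, 15                 # sender's and receiver's secrets

R1 = pow(g, x, n)            # sender transmits R1 = g^x mod n
R2 = pow(g, y, n)            # receiver transmits R2 = g^y mod n

K_sender   = pow(R2, x, n)   # (g^y)^x mod n
K_receiver = pow(R1, y, n)   # (g^x)^y mod n
assert K_sender == K_receiver == pow(g, x * y, n)
print(K_sender)              # both sides derive the same shared key
```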

RSA Algorithm

  • It comes under asymmetric key cryptography.
  • The steps for key generation are:
    • Choose two prime numbers P and Q.
    • Calculate n = P × Q.
    • Calculate Euler's totient function, φ(n) = (P − 1) × (Q − 1).
    • Choose (d, e) such that e × d ≡ 1 (mod φ(n)), with gcd(e, φ(n)) = 1.
    • The receiver sends (e, n) to the sender; this is the receiver's public key.
    • The sender (client) encrypts the data with the receiver's public key as C = P^e mod n.
    • The ciphertext C is placed on the channel and is decrypted by the receiver using his own private key (d, n).
    • The plaintext P is recovered as P = C^d mod n.
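
The steps can be traced with toy primes (illustrative only; real RSA uses primes hundreds of digits long):

```python
from math import gcd

P, Q = 11, 13
n = P * Q                        # 143
phi = (P - 1) * (Q - 1)          # Euler's totient = 120

e = 7
assert gcd(e, phi) == 1          # e must be coprime to phi(n)
d = pow(e, -1, phi)              # modular inverse: d*e = 1 (mod phi), Python 3.8+

plain = 42
cipher = pow(plain, e, n)        # C = P^e mod n
assert pow(cipher, d, n) == plain  # P = C^d mod n recovers the plaintext
print(cipher, pow(cipher, d, n))
```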

Digital Signature

It is done to provide

  • Integrity of the data
  • Authentication of the user (and non-repudiation)



