Digital Communication: Communication System

By Vishnu Pratap Singh|Updated : February 28th, 2022

The shape of the waveform is affected by two mechanisms:

  • As all the transmission lines and circuits have some non-ideal transfer function, there is a distorting effect on the ideal pulse.
  • Unwanted electrical noise or other interference further distorts the pulse waveform.                                                                                                                 

Both of these mechanisms cause the pulse shape to degrade as a function of distance. While the transmitted pulse can still be reliably identified, it is regenerated. The circuits that perform this function at regular intervals along a transmission system are called regenerative repeaters.

  • Digital circuits are less subject to distortion and interference than analog circuits.
  • Digital circuits are more reliable and can be produced at lower cost than analog circuits. Also, digital hardware lends itself to more flexible implementation than analog hardware.
  • Digital techniques lend themselves naturally to signal processing functions that protect against interference and jamming.

Elements of Digital Communication:

  • The source output may be either an analog signal (such as an audio or video signal) or a digital signal (such as the output of a teletype machine) that is discrete in time and has a finite number of output characters.
  • In a digital communication system, the messages produced by the source are converted into a sequence of binary digits. The process of efficiently converting the output of either an analog or digital source into a sequence of binary digits is called source encoding or data compression.
  • The sequence of binary digits from the source encoder, which we call the information sequence, is passed to the channel encoder. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy in the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel. This increases the reliability of the received data and improves the fidelity of the received signal.
  • The binary sequence at the output of the channel encoder is passed to the digital modulator, which serves as the interface to the communication channel. Since nearly all the communication channels encountered in practice are capable of transmitting electrical signals (waveforms), the primary purpose of the digital modulator is to map the binary information sequence into signal waveforms.
  • Let us suppose that the coded information sequence is to be transmitted one bit at a time at some uniform rate R bits per second (bits/s). The digital modulator may simply map the binary digit 0 into a waveform s0(t) and the binary digit 1 into a waveform s1(t). In this manner, each bit from the channel encoder is transmitted separately. We call this binary modulation.
  • Alternatively, the modulator may transmit b coded information bits at a time by using M = 2^b distinct waveforms si(t), i = 0, 1, ..., M − 1, one waveform for each of the 2^b possible b-bit sequences. We call this M-ary modulation (M > 2).
  • Note that a new b-bit sequence enters the modulator every b/R seconds. Hence, when the channel bit rate R is fixed, the amount of time available to transmit one of the M waveforms corresponding to a b-bit sequence is b times the time period in a system that uses binary modulation.
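The timing relations above can be checked directly. A minimal sketch (the numeric values of R and b are illustrative, not from the text):

```python
# Timing relations for M-ary vs. binary modulation (illustrative numbers).
R = 1000          # channel bit rate in bits/s
b = 3             # bits per symbol
M = 2 ** b        # number of distinct waveforms

T_binary = 1 / R  # time available per waveform in binary modulation (s)
T_mary = b / R    # time available per b-bit waveform in M-ary modulation (s)

print(M)                   # number of waveforms
print(T_mary / T_binary)   # M-ary waveform lasts b times longer
```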
  • At the receiving end of a digital communication system, the digital demodulator processes the channel-corrupted transmitted waveform and reduces the waveforms to a sequence of numbers that represent estimates of the transmitted data symbols (binary or M-ary).
  • This sequence of numbers is passed to the channel decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data.

  The source decoder accepts the output sequence from the channel decoder and, from knowledge of the source encoding method used, attempts to reconstruct the original signal from the source.  

Pulse Code Modulation (PCM)

  • The simplest form of pulse digital modulation is called Pulse Code Modulation (PCM), wherein each analogue sample value is quantized into a discrete value for representation as a digital code word.
  • In this modulation scheme, we first sample the analog signal, then quantize it into discrete levels, and then encode the levels and transmit them as digital codes.
  • For an n-bit quantizer with sampling rate fs, the bit rate is Rb = n·fs bits/s.
  • The essential operations in the receiver are regeneration of impaired signals, decoding and demodulation of the train of quantized samples.
  • The transmission bandwidth requirement is a minimum of Rb/2 and a maximum of Rb.
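The bit-rate and bandwidth relations above can be sketched with telephony-style numbers (n = 8 bits, fs = 8 kHz; the values are illustrative):

```python
# PCM bit rate and transmission-bandwidth bounds (illustrative values).
n = 8            # bits per sample
fs = 8000        # sampling rate in Hz (telephony-style example)

Rb = n * fs      # bit rate: Rb = n * fs  (bits/s)
bw_min = Rb / 2  # minimum transmission bandwidth (Hz)
bw_max = Rb      # maximum transmission bandwidth (Hz)

print(Rb, bw_min, bw_max)   # 64000 32000.0 64000
```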

Quantization:

  • Quantization refers to the use of a finite set of amplitude levels and the selections of a level nearest to a particular sample value of the message signal as the representation for it.
  • Basically, the quantizers are of two types:
    • Uniform quantizer
    • Non-uniform quantizer

 

Uniform Quantizer:

  • A uniform quantizer is one in which the step size remains the same throughout the input range.
  • Quantization is the process of setting the sample amplitude, which can be continuously variable, to a discrete value.
  • We assume that the amplitude of the signal m(t) is confined to the range (−mp, +mp). This range (2mp) is divided into L levels, each of step size δ = 2mp/L.
  • A sample amplitude value is approximated by the midpoint of the interval in which it lies.
  • For a uniform quantizer with step size δ (also written Δ), the maximum quantization error is ±Δ/2.
  • Commonly used uniform quantizers are of the mid-tread and mid-rise types.
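A minimal mid-rise uniform quantizer sketch; the function name and parameter values are illustrative, not from the text:

```python
# Mid-rise uniform quantizer over (-mp, +mp): maps a sample to the midpoint
# of its interval, so the error never exceeds delta/2.
def uniform_quantize(x, mp=1.0, n_bits=3):
    L = 2 ** n_bits             # number of levels
    delta = 2 * mp / L          # step size delta = 2*mp / L
    x = max(-mp, min(mp - 1e-12, x))   # clip into the input range
    k = int((x + mp) // delta)         # index of the containing interval
    return -mp + (k + 0.5) * delta     # midpoint of that interval

mp, n_bits = 1.0, 3
delta = 2 * mp / 2 ** n_bits
x = 0.3
err = abs(uniform_quantize(x, mp, n_bits) - x)
print(err <= delta / 2)   # quantization error bounded by delta/2
```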

Non-uniform Quantizer:

  • A non-uniform quantizer is one in which the step size varies according to the input values. Because of quantization, inherent errors are introduced into the signal; this error is called quantization error.
  • The signal-to-quantization-noise ratio (SQNR) of PCM for a full-scale sinusoidal input (with a uniform quantizer) is

SQNR (dB) = 1.76 + 6.02n

where n is the number of bits of the quantizer.

  • As the number of quantizer bits increases, the SQNR increases, but the bandwidth required for transmission also increases.
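The SQNR formula can be tabulated for a few common word lengths (the helper name is illustrative):

```python
# SQNR of PCM for a full-scale sinusoid: each extra bit buys about 6 dB.
def sqnr_db(n):
    return 1.76 + 6.02 * n

for n in (8, 12, 16):
    print(n, round(sqnr_db(n), 2))
```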

Companding: Compressing the signal at the transmitter and expanding it at the receiver are together called companding. Compression and expansion are done by passing the signal through an amplifier with non-linear transfer characteristics.

There are two types of companding techniques: μ-law companding and A-law companding.

μ-law Companding:

  • The compression characteristic is continuous.
  • It uses a logarithmic compression curve, which is ideal in the sense that quantization intervals, and hence quantization noise, are directly proportional to the signal level:

y = sgn(x) · ln(1 + μ|x|) / ln(1 + μ)

where the standard value is μ = 255.
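A sketch of μ-law compression and its inverse (expansion) with μ = 255; the function names are illustrative:

```python
import math

# mu-law companding: boost weak samples before quantization at the
# transmitter, undo the boost at the receiver.
MU = 255.0

def mu_compress(x):   # x normalized to [-1, 1]
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):     # exact inverse of mu_compress
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

x = 0.01                       # a weak (soft-talker) sample
y = mu_compress(x)
print(y > 0.2)                 # small inputs are strongly boosted
print(abs(mu_expand(y) - x) < 1e-12)   # expansion undoes compression
```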

A-law Companding:

  • The compression characteristic is piecewise: a linear segment for low-level inputs and a logarithmic segment for high-level inputs.
  • This is the ITU-T standard.
  • It is very similar to μ-law coding. It is represented by straight-line segments to facilitate digital companding:

y = sgn(x) · A|x| / (1 + ln A),            for 0 ≤ |x| < 1/A
  = sgn(x) · (1 + ln(A|x|)) / (1 + ln A),  for 1/A ≤ |x| ≤ 1

where the standard value is A = 87.6.

  • The signal-to-noise ratio of PCM remains almost constant with companding.
  • Companding is done to avoid non-linear distortion of the channel.
  • Companding is widely used in telephone systems to reduce non-linear distortion and to compensate for the signal-level difference between soft and loud talkers.

Differential Pulse Code Modulation (DPCM)

  • Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionalities based on the prediction of the samples of the signal.
  • The input can be an analog signal or a digital signal.
  • If the input is a continuous-time analog signal, it needs to be sampled first so that a discrete-time signal is the input to the DPCM encoder.
  • When the samples of a signal are highly correlated, we go for DPCM in order to save bandwidth, or to use the same bandwidth at a higher data rate.
  • The principle used in DPCM is prediction.

The two main differential coding schemes are:

  • Delta Modulation
  • Differential PCM and Adaptive Differential PCM (ADPCM)

Delta Modulation:

  • Delta modulation converts an analogue signal, normally voice, into a digital signal.
  • Delta modulation is a special case of differential pulse code modulation.
  • It is called one-bit DPCM, as it transmits only one bit per sample.
  • In delta modulation, the problem of slope overload occurs if the input changes faster than the staircase approximation can track, that is, if

|dm(t)/dt|max > Δ/Ts

  • To overcome slope-overload error, we choose the optimum step size Δopt such that

Δopt/Ts ≥ |dm(t)/dt|max

where Ts is the sampling interval.

Hunting (granular noise) is the second problem; it occurs when the message is almost constant.

Adaptive Delta Modulation: Adaptive delta modulation is a scheme that chooses the step size in accordance with the message signal's sampled values, to overcome slope-overload error and hunting. If the message varies rapidly, the step size is large; if the message varies slowly, the step size is small.
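The one-bit delta-modulation loop and the slope-overload condition can be sketched as follows (the signal, rates, and names are illustrative assumptions):

```python
import math

# One-bit delta modulator: transmit 1 if the input is above the running
# staircase approximation, else 0; the approximation moves by +/- delta.
def delta_modulate(samples, delta):
    approx, bits, track = 0.0, [], []
    for s in samples:
        bit = 1 if s >= approx else 0
        approx += delta if bit else -delta
        bits.append(bit)
        track.append(approx)
    return bits, track

fs, fm = 8000, 100                       # sampling rate and tone frequency
t = [k / fs for k in range(200)]
m = [math.sin(2 * math.pi * fm * tk) for tk in t]

# Slope-overload condition: delta/Ts must exceed the max slope 2*pi*fm
# of m(t); pick delta 20% above the bound so the staircase can keep up.
delta_ok = 2 * math.pi * fm / fs * 1.2
bits, track = delta_modulate(m, delta_ok)
max_err = max(abs(a - b) for a, b in zip(m, track))
print(max_err < 3 * delta_ok)            # staircase tracks the signal
```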

 

Digital Modulation Schemes

It is possible to transmit analog signals, i.e., speech, video, etc., in digital format. Some digital modulation schemes are given below.

  • Digital Carrier Modulation: Commonly used digital modulation schemes are Amplitude Shift Keying (ASK), Frequency Shift Keying (FSK) and Phase Shift Keying (PSK).
  • Amplitude Shift Keying (ASK): The amplitude of a high-frequency carrier is varied in accordance with digital data (0 or 1).

S(t) = Ac cos(2πfct);  0 ≤ t ≤ Tb (for bit 1)

     = 0;  otherwise (for bit 0)

Bandwidth = 2 × (1/Tb) = 2 × bit rate

  • For digital input 1, the amplitude level is high; for digital input 0, the amplitude level is low.
  • The signaling used is on-off signaling.
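A minimal on-off-keying waveform generator, assuming rectangular bit shaping (the carrier frequency, bit duration, and sampling density are illustrative):

```python
import math

# ASK / on-off keying: bit 1 -> carrier on, bit 0 -> carrier off.
def ask_waveform(bits, fc=4, Tb=1.0, samples_per_bit=100):
    wave = []
    for i, b in enumerate(bits):
        for k in range(samples_per_bit):
            t = i * Tb + k * Tb / samples_per_bit
            wave.append(b * math.cos(2 * math.pi * fc * t))  # b=0 -> silence
    return wave

w = ask_waveform([1, 0, 1])
print(max(abs(x) for x in w[100:200]))  # middle bit is 0: carrier fully off
```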

Demodulation of ASK:

  • For binary digit 1: Ac cos(2πfct) × Ac cos(2πfct) = (Ac²/2)[1 + cos(4πfct)]
  • Output of LPF = Ac²/2
  • For binary digit 0, output of LPF = 0
  • In ASK, the probability of error (Pe) is high.
  • In ASK, the SNR is low.

Phase Shift Keying (PSK):

In phase shift keying, the phase of a high-frequency carrier is varied in accordance with the digital data (1 or 0).

  • NRZ signaling is used.

S(t) = Ac cos(2πfct) for bit 1

     = −Ac cos(2πfct) for bit 0

The carrier frequency must be an integer multiple of the bit rate:

fc = n·Rb, i.e., Tb = n/fc

where n is an integer.

  • In the case of PSK, the probability of error is low.
  • In the case of PSK, the SNR is high.
  • It is the technique mainly used in wireless transmission.
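A sketch of BPSK modulation and coherent demodulation (multiply by the carrier and integrate); the carrier frequency, bit duration, and sample count are illustrative:

```python
import math

# BPSK: bit 1 -> +Ac*cos(2*pi*fc*t), bit 0 -> -Ac*cos(2*pi*fc*t).
# fc is chosen as an integer multiple of the bit rate, as the text requires.
fc, Tb, N = 8, 1.0, 800       # carrier Hz, bit duration s, samples per bit

def bpsk_symbol(bit):
    sign = 1 if bit else -1
    return [sign * math.cos(2 * math.pi * fc * k * Tb / N) for k in range(N)]

def demod(symbol):
    # Coherent demodulation: correlate against the local carrier replica.
    corr = sum(s * math.cos(2 * math.pi * fc * k * Tb / N)
               for k, s in enumerate(symbol))
    return 1 if corr > 0 else 0

bits = [1, 0, 1, 1, 0]
print([demod(bpsk_symbol(b)) for b in bits])  # recovers the bit sequence
```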

Frequency Shift Keying (FSK):

  • In frequency shift keying, the frequency of the carrier is varied in accordance with the digital data (1 or 0).
  • For digital data 1 we use frequency f1, and for digital data 0 we use frequency f2.
  • NRZ signaling is used here.
  • FSK can be generated using a voltage-controlled oscillator (VCO).

Bandwidth = 2Δf + 2fm

Bandwidth = (f1 + 1/Tb) − (f2 − 1/Tb)

= (f1 − f2) + 2/Tb;  where f1 − f2 = 2Δf
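The FSK bandwidth relation above in numbers (the tone frequencies and bit rate are illustrative):

```python
# FSK bandwidth: (f1 - f2) + 2/Tb, i.e. 2*delta_f + 2*Rb.
f1, f2 = 1200, 800       # mark and space frequencies (Hz)
Tb = 1 / 400             # bit duration for a 400 bit/s stream

bw = (f1 - f2) + 2 / Tb  # = 2*delta_f + 2*bit rate
print(bw)                # 1200.0 Hz
```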

Key Points

  • In the case of FSK, Pe is low, but the required SNR is high.
  • Multiplexing is difficult in FSK.

Differential Phase Shift Keying (DPSK): PSK needs a complicated synchronizing circuit at the receiver; this disadvantage of PSK is removed in DPSK.

In DPSK, each bit is encoded as a phase change (or no change) of the carrier relative to the previous symbol interval, rather than as an absolute phase.

Note: Advantage of DPSK over PSK is, DPSK does not require a coherent carrier for demodulation.

 

Probability of Error: The probability of error for different digital modulation schemes (with Eb the energy per bit and N0 the noise power spectral density) is approximately:

  • Coherent ASK: Pe = (1/2) erfc(√(Eb/4N0))
  • Coherent FSK: Pe = (1/2) erfc(√(Eb/2N0))
  • Coherent PSK: Pe = (1/2) erfc(√(Eb/N0))
  • Non-coherent FSK: Pe = (1/2) e^(−Eb/2N0)
  • DPSK: Pe = (1/2) e^(−Eb/N0)

  • In the case of FSK, f1 and f2 are chosen such that f1 = mfs and f2 = kfs, where m and k are integers.
  • Bandwidth efficiency for PSK is η = Rb/Bandwidth = 1/2, since the PSK bandwidth is 2/Tb.

Noise: In electrical terms, noise may be defined as an unwanted form of energy which tends to interfere with the proper reception and reproduction of transmitted signals. Conveniently, noise can be classified as:

  1. External noise
  2. Internal noise

Noise Analysis in Communication System: The noise analysis can be done in communication system by calculating the following terms

Figure of Merit: Noise analysis in Continuous Wave (CW) modulation is carried out in the form of a parameter known as figure of merit denoted by γ. This parameter figure of merit γ is the ratio of output signal-to-noise ratio to the input signal-to-noise ratio of the receiver.

Signal to Noise Ratio (SNR): It is defined as ratio of signal power to noise power.

In-phase noise component:

nc(t) = n(t) cos(2πfct) + n̂(t) sin(2πfct)

where n̂(t) is the Hilbert transform of n(t).

Quadrature noise component:

ns(t) = n̂(t) cos(2πfct) − n(t) sin(2πfct)

where n(t) represents the filtered noise.

Total noise power (N) = white-noise power spectral density × bandwidth

or

N = (η/2) × Bandwidth

where η/2 is the two-sided noise power spectral density. The noise has a Gaussian distribution.

  • The effect of channel noise may be obtained by simple addition of the signal x(t) and the noise n(t).
  • The noise performance depends on the relative magnitudes of the signal and noise.

 

Multiplexing

  • Multiplexing is a technique in which several message signals are combined into a composite signal for transmission over a common channel. In order to transmit a number of these signals over the same channel, the signals must be kept apart so that they do not interfere with each other, and hence can be separated easily at the receiver end.


  • Digital radio has developed ways in which more than one conversation can be accommodated (multiplexed) inside the same physical RF channel. There are three common ways of achieving this:
    • Frequency Division Multiple Access (FDMA)
    • Time Division Multiple Access (TDMA)
    • Code Division Multiple Access (CDMA)

 Frequency Division Multiplexing

  • In FDMA, the whole bandwidth of the channel is divided into small segments and allotted to different users, so that they can access the channel at the same time using their allotted bands.


 Time Division Multiplexing

  • In TDMA, the whole time frame is divided into slots among different users, so that at any instant only one user is accessing the channel.


Key Points:

  • Bandwidth requirement in TDMA and FDMA is almost same for the same number of users.
  • The TDMA system can be used to multiplex analog or digital signals, however, it is more suitable for the digital signal multiplexing.
  • The communication channel over which the TDMA signal travels would ideally need infinite bandwidth to avoid signal distortion; practical channels have finite bandwidth and are known as band-limited channels.

 Code Division Multiplexing (CDMA)

  • Instead of splitting the RF channel into sub-channels or time slots, each slot has a unique code. Unlike FDMA, the transmitted RF frequency is the same in each slot, and unlike TDMA, the slots are transmitted simultaneously. In the diagram, the channel is split into four code slots. Each slot is still capable of carrying a separate conversation, because the receiver only reconstructs information sent from a transmitter with the same code.

Introduction to Coding Technique:

Whether codes can correct or merely detect errors depends on the redundancy contained in the code. Codes that can detect errors are called error-detecting codes, and codes that can correct errors are known as error-correcting codes. There are many different error-control codes. These are divided into two types, i.e., block codes and convolutional codes. Both are described by binary codes, which consist of only two elements, i.e., 0 and 1. The set {0, 1} is denoted K.

Channel Coding:

A basic block diagram for channel coding is shown in the figure below. The binary data sequence at the input of the channel encoder may be the output of a source encoder. The channel encoder adds extra bits to the message bits so that bit errors in the original input data can be detected and corrected at the receiver. The added extra bits constitute controlled redundancy. The channel decoder at the receiver removes this redundancy, recovering the actual transmitted data. The main objective of the channel encoder and decoder is to reduce the effect of channel noise.

[Figure: block diagram of channel coding]

Linear Block Codes :

Binary Field :

The set K = {0, 1} is a binary field. The binary field has two operations, addition and multiplication, such that the results of all operations are in K. The rules of addition and multiplication are as follows:

Addition:

0 ⊕ 0 = 0        1 ⊕ 1 = 0      0 ⊕  1 = 1 ⊕  0 = 1  

Multiplication:

0 ⋅ 0 = 0 1 ⋅ 1 = 1        0 ⋅ 1 = 1 ⋅ 0 = 0

Linear Codes:

Let a = (a1, a2, ..., an) and b = (b1, b2, ..., bn) be two code words in a code C. The sum of a and b, denoted a ⊕ b, is defined by (a1 ⊕ b1, a2 ⊕ b2, ..., an ⊕ bn). A code C is called linear if the sum of two code words is also a code word in C. A linear code C must contain the zero code word 0 = (0, 0, ..., 0), since a ⊕ a = 0.

Hamming Weight and Distance :

Let c be a code word of length n. The Hamming weight of c, denoted w(c), is the number of 1s in c.

Let a and b be code words of length n. The Hamming distance between a and b, denoted d(a, b), is the number of positions in which a and b differ. Thus, the Hamming weight of a code word c is the Hamming distance between c and 0, that is,

w(c) = d(c, 0)

Similarly, the Hamming distance can be written in terms of Hamming weight as

d(a, b) = w(a ⊕ b)    
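The weight/distance identities above are easy to verify in code (the example vectors are illustrative):

```python
# Hamming weight and distance over K = {0, 1}:
# w(c) = d(c, 0) and d(a, b) = w(a XOR b).
def weight(c):
    return sum(c)

def distance(a, b):
    return sum(x ^ y for x, y in zip(a, b))

a = [1, 0, 1, 1, 0]
b = [1, 1, 0, 1, 0]
print(distance(a, b))                                           # positions where a, b differ
print(distance(a, b) == weight([x ^ y for x, y in zip(a, b)]))  # d(a,b) = w(a XOR b)
```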

Minimum Distance:

The minimum distance dmin of a linear code C is defined as the smallest Hamming distance between any pair of code words in C.

This follows from the closure property of linear codes: the sum (modulo 2) of two code words is also a code word.

Theorem 1  :

"The minimum distance dmin of a linear code C equals the smallest Hamming weight of the non-zero code words in C."

Error Detection and Correction Capabilities:

The minimum distance dmin of a linear code C is an important parameter of C. It determines the error detection and correction capabilities of C. This is stated in the following theorems.

Theorem 2:

A linear code C of minimum distance dmin can detect up to t errors if and only if

dmin ≥ t + 1

Theorem 3:

A linear code C of minimum distance dmin can correct up to t errors if and only if

dmin ≥ 2t + 1

If this condition is not met, there exists a received word r such that d(ci, r) ≤ t and yet r is as close to another code word cj as it is to ci. Thus, the decoder may choose cj, which is incorrect.

Generator Matrix :

For an (n, k) linear block code C, we define a code vector c and a data (or message) vector d as follows:

c = [c1,c2,...,cn]

d = [d1,d2,...,dk]

If the data bits appear in specified locations of c, the code C is called a systematic code; otherwise, it is called non-systematic. Here we assume that the first k bits of c are the data bits and the last (n − k) bits are the parity-check bits, formed by linear combinations of the data bits, that is,

ck+1 = p11d1 ⊕ p12d2 ⊕ ⋅  ⋅  ⋅  ⊕ p1kdk

ck+2 = p21d1 ⊕ p22d2 ⊕ ⋅  ⋅  ⋅  ⊕ p2kdk

  ⋮

ck+m = pm1d1 ⊕ pm2d2 ⊕ ⋅  ⋅  ⋅  ⊕ pmkdk

where m = n − k. The above equations can be written in matrix form as

c = dG

where             G = [Ik  PT]

where Ik is the kth-order identity matrix and PT is the transpose of the m × k matrix P given by

P = [p11 p12 ... p1k]
    [p21 p22 ... p2k]
    [ ⋮            ⋮ ]
    [pm1 pm2 ... pmk]

The k × n matrix G is called the generator matrix. Note that a generator matrix for C must have k rows and n columns, and it must have rank k; that is, the k rows of G are linearly independent.
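As an illustration, encoding via c = dG (mod 2) with a standard (7, 4) Hamming generator matrix in systematic form G = [I4 | PT] (this particular G is one common choice, an assumption rather than something given in the text):

```python
# Systematic (7,4) Hamming code: c = dG (mod 2), G = [I4 | P^T].
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(d):
    # matrix-vector product over the binary field (mod-2 sums)
    return [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

d = [1, 0, 1, 1]
c = encode(d)
print(c[:4] == d)   # systematic: the first k bits are the data bits
print(c)
```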

Parity-Check Matrix:

Let H denote an m × n matrix defined by

H = [P   Im]

where m = n − k and Im is the mth-order identity matrix. Then

HT = [PT]
     [Im]

Using the above equations, we have

GHT = [Ik  PT] [PT] = PT ⊕ PT = 0
               [Im]

where 0 denotes the k × m zero matrix. Now we have,

cHT = dGHT = 0

where 0 denotes the 1 × m zero vector.

The matrix H is called the parity-check matrix of C. Note that the rank of H is m = n – k and the rows of H are linearly independent. The minimum distance dmin of a linear block code C is closely related to the structure of the parity-check matrix H of C.

Syndrome Decoding:

Let r denote the received word of length n when a code word c of length n was sent over a noisy channel. Then r = c ⊕ e, where e is termed the error pattern. Note that e = r ⊕ c.

Let us first consider the case of a single error in the ith position. Then we can represent e by e = [0 ... 0 1 0 ... 0], with the 1 in the ith position.

Next, we evaluate rHT and obtain

rHT = (c ⊕ e)HT = cHT ⊕eHT = eHT = S    

Here, S is called the syndrome of r.

Thus, using S and noting that eHT is the ith row of HT, we can find the error position by comparing S with the rows of HT (equivalently, the columns of H). Decoding by this simple comparison method is called syndrome decoding. Note that syndrome decoding correctly decodes all error patterns that the code is capable of correcting. A zero syndrome indicates that r is a code word and is presumably correct.

With syndrome decoding, an (n, k) linear block code can correct up to t errors per code word if n and k satisfy the following Hamming bound:

2^(n−k) ≥ C(n, 0) + C(n, 1) + ⋯ + C(n, t)

 

A block code for which the equality holds is known as a perfect code. Single-error-correcting perfect codes are called Hamming codes.

NOTE:  The Hamming bound is necessary but not sufficient for the construction of a t-error correcting linear block code.
