# Dynamic Programming and Its Examples

By Mukesh Kumar|Updated : June 1st, 2023

Dynamic programming is generally used to solve problems in which overlapping subproblems are present. It is most commonly applied to recursive algorithms: dynamic programming is used for optimization, and most optimization problems call for recursion. However, not every recursive problem can be solved using dynamic programming.

Unless overlapping subproblems are present, as in the Fibonacci sequence problem, a recursion can only arrive at the answer through a plain divide-and-conquer strategy. Here we look at dynamic programming in detail, including how it works, its applications, and a comparison with the greedy approach to understand the concept better.

## What is Dynamic Programming?

A difficult problem can be broken down into several smaller, simpler subproblems using dynamic programming, which then solves each of those subproblems just once and stores the results. To provide the optimum solution for the given problem, a dynamic programming algorithm will look at the previously resolved subproblems and combine their solutions.

Dynamic programming is employed when a solution can be expressed recursively in terms of solutions to subproblems (optimal substructure). The problem is then solved in a bottom-up fashion by building a table of solved subproblems that are used to solve larger ones.
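The bottom-up table-building approach described above can be sketched with the classic Fibonacci problem (a minimal illustration, not code from the original article):

```python
def fib(n):
    """Bottom-up DP: build a table of solved subproblems, smallest first."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry combines previously stored subproblem solutions.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55
```

Each table entry is computed exactly once, so the naive exponential recursion becomes a linear-time loop.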

## Working of Dynamic Programming

The basic components of dynamic programming are listed below. Dynamic programming applies to problems with the following characteristics:

• The subproblems overlap, and the problem has optimal substructure. Optimal substructure means that an optimization problem can be solved by combining the optimal solutions of its constituent subproblems.
• Because dynamic programming saves intermediate results, the space complexity grows, but the time complexity decreases.

## Dynamic Programming Algorithm

Dynamic programming is an algorithmic technique used to solve optimization problems by breaking them into overlapping subproblems, solving each subproblem only once, and storing the solutions for efficient reuse. It avoids redundant computations and improves efficiency by leveraging the principle of optimality.

The steps that dynamic programming takes are as follows:

• It divides the complex problem into simpler subproblems.
• It solves each of these subproblems optimally.
• It stores the results of the subproblems (memoization). Memoization is the process of storing the answers to subproblems so they can be reused.
• The same subproblem is never calculated more than once, because its stored result is reused.
• Finally, it combines the stored results to compute the answer to the complex problem.
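The steps above can be sketched as top-down recursion with memoization, again using Fibonacci as a small example (an illustrative sketch, not from the original article):

```python
def fib_memo(n, cache=None):
    """Top-down DP: recursion plus a cache of solved subproblems."""
    if cache is None:
        cache = {}
    if n in cache:           # reuse: never recompute a stored subproblem
        return cache[n]
    if n < 2:                # simplest subproblems (base cases)
        result = n
    else:                    # divide into simpler subproblems and solve them
        result = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    cache[n] = result        # memoization: store the outcome for reuse
    return result

print(fib_memo(10))  # 55
```

The `cache` dictionary is the memo table; each value of `n` is computed once, then every later request is answered by a lookup.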

## Dynamic Programming vs Greedy Algorithms

Dynamic programming and greedy algorithms are both optimization techniques. However, greedy algorithms make locally optimal choices in the hope of reaching a global optimum. A greedy algorithm may therefore make a choice that appears optimal at the time but turns out to be costly later, so it does not guarantee a globally optimal solution.

On the other hand, dynamic programming selects the overall optimum after solving every subproblem optimally. Check out the difference between dynamic programming and the greedy algorithm in detail.
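The contrast can be seen with the coin change problem. With the (hypothetical, illustrative) denominations {1, 3, 4} and a target of 6, the greedy "always take the largest coin" rule gives 4 + 1 + 1 (three coins), while DP finds the optimum 3 + 3 (two coins):

```python
def greedy_coins(coins, target):
    """Greedy: repeatedly take the largest coin that still fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += target // c
        target %= c
    return count

def dp_coins(coins, target):
    """DP: dp[v] = minimum number of coins needed to make value v."""
    INF = float("inf")
    dp = [0] + [INF] * target
    for v in range(1, target + 1):
        for c in coins:
            if c <= v and dp[v - c] + 1 < dp[v]:
                dp[v] = dp[v - c] + 1
    return dp[target]

print(greedy_coins([1, 3, 4], 6))  # 3 (takes 4, then 1, then 1)
print(dp_coins([1, 3, 4], 6))      # 2 (takes 3 and 3)
```

The greedy choice of the coin 4 looks best locally but forces two extra coins later; DP avoids this by comparing all subproblem solutions before committing.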

## Dynamic Programming in DAA

Dynamic programming is a powerful algorithmic technique in the field of Design and Analysis of Algorithms (DAA). It is particularly useful for optimization problems that exhibit optimal substructure and overlapping subproblems: the problem is broken down into smaller subproblems, each of which is solved only once and its result stored for efficient reuse.

## Application of Dynamic Programming

Some well-known computational problems that can be solved with a dynamic programming approach, along with their time complexities, are listed below:

• Fibonacci number series: O(n)
• 0/1 knapsack problem: O(nW), for n items and capacity W
• All-pairs shortest paths by Floyd-Warshall: O(n³)
• Longest common subsequence: O(mn)
• Matrix chain multiplication: O(n³)
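As one example from the list above, the O(mn) longest common subsequence algorithm fills a 2-D table where each cell depends only on its three neighbors (a standard textbook formulation, sketched here for illustration):

```python
def lcs_length(a, b):
    """O(m*n) DP table: dp[i][j] = LCS length of a[:i] and b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop one character from either string.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 (the subsequence "GTAB")
```

The table makes the overlapping subproblems explicit: every pair of prefixes is solved once and looked up thereafter.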

## Dynamic Programming Examples

Here are a few examples of the problems that can be solved using dynamic programming techniques:

• Fibonacci sequence: computing the nth Fibonacci number, where each number is the sum of the two preceding ones (F(n) = F(n-1) + F(n-2)).
• Longest common subsequence: finding the longest subsequence that is common to two or more sequences.
• 0/1 knapsack: determining the most valuable combination of items to place into a knapsack with a limited weight capacity.
• Edit distance: calculating the minimum number of operations (insertion, deletion, substitution) required to transform one string into another.
• Coin change: determining the minimum number of coins needed to make change for a given target value using a given set of coin denominations.
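The string transformation problem in the list above (edit distance) is a good worked example, since its recurrence covers all three operations. A minimal sketch of the standard Wagner-Fischer-style table, for illustration:

```python
def edit_distance(s, t):
    """dp[i][j] = minimum operations to transform s[:i] into t[:j]."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                # delete all i characters of s
    for j in range(n + 1):
        dp[0][j] = j                # insert all j characters of t
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]          # no operation needed
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],     # deletion
                                   dp[i][j - 1],     # insertion
                                   dp[i - 1][j - 1]) # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

Transforming "kitten" into "sitting" takes three operations (two substitutions and one insertion), which the filled table recovers in O(mn) time.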


## Dynamic Programming FAQs

• By definition, dynamic programming is a method for solving difficult problems by decomposing them into a number of simpler subproblems, solving each subproblem only once, and storing the answers to avoid performing the same calculations repeatedly.

• Bellman adopted the word "dynamic" to sound impressive and accurately describe the problem's time-varying components. In the context of a military training or logistical plan, the term "programming" denoted the application of a technique to identify an optimal program.

• Since Java is made to change with the environment, it is thought to be more dynamic than C or C++. Java programs have the ability to store a large quantity of run-time data that can be utilized to validate and address object access issues in real-time.

• Recursion with memoization, also known as dynamic programming, is the process of calculating and storing values that can later be used to solve repeating subproblems. This speeds up your code and lowers the time complexity, since fewer CPU cycles are consumed.

• Richard E. Bellman (1920-1984) is regarded as the father of dynamic programming. He authored numerous works and received numerous awards, including the first Norbert Wiener Prize in Applied Mathematics.
