Dynamic Programming: What is Dynamic Programming and its Examples

By Mukesh Kumar|Updated : August 3rd, 2022

Dynamic Programming is generally used to solve problems in which overlapping subproblems are present. It is most commonly applied to recursive algorithms: dynamic programming is a technique for optimization, and many optimization problems are naturally expressed recursively. However, not every recursive problem can be solved using dynamic programming.

A plain recursion amounts to a divide-and-conquer strategy unless the subproblems overlap, as they do in the Fibonacci sequence problem. Here we will see dynamic programming in detail, along with how it works, its applications, and a comparison of dynamic programming with the greedy algorithm to understand the concept better.

What is Dynamic Programming?

A difficult problem can be broken down into several smaller, simpler subproblems using dynamic programming, which then solves each of those subproblems just once and stores the results. To provide the optimum solution for the given problem, a dynamic programming algorithm will look at the previously resolved subproblems and combine their solutions.

Dynamic programming applies when a solution can be defined recursively in terms of solutions to subproblems (optimal substructure). The problem is then solved in a bottom-up fashion by building a table of solved subproblems, which are combined to solve larger ones.
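As a minimal sketch of this bottom-up style, the Fibonacci numbers can be computed by filling a table of solved subproblems (the function name `fib_bottom_up` is our own, used for illustration):

```python
def fib_bottom_up(n):
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n < 2:
        return n
    # table[i] holds the i-th Fibonacci number (a solved subproblem).
    table = [0] * (n + 1)
    table[1] = 1
    # Build bottom-up: each entry combines two already-solved subproblems.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Because every subproblem is solved exactly once and reused from the table, the running time is O(n) instead of the exponential time of the naive recursion.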

Working of Dynamic Programming

The basic requirements for dynamic programming are listed below. Dynamic programming applies to problems with the following characteristics:

  • The subproblems overlap and the problem has optimal substructure. Optimal substructure means that an optimization problem can be solved by combining the optimal solutions of its constituent subproblems.
  • Because dynamic programming stores intermediate results, the space complexity grows, but the time complexity decreases.

The steps that dynamic programming takes are as follows:

  • It divides the complex problem into simpler subproblems.
  • It solves each of these subproblems optimally.
  • It stores the results of the subproblems (memoization). Memoization is the process of storing the answers to subproblems.
  • Because stored results are reused, no subproblem is computed more than once.
  • Finally, it combines the stored results to compute the answer to the complex problem.
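The steps above can be sketched as a memoized recursion. Here a binomial coefficient is used as an illustrative example (the problem choice and the function name `binom` are our own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # the cache stores each subproblem's answer (memoization)
def binom(n, k):
    # Base cases: one way to choose 0 items or all items.
    if k == 0 or k == n:
        return 1
    # Each call splits into two smaller, overlapping subproblems;
    # the cache ensures no subproblem is computed more than once.
    return binom(n - 1, k - 1) + binom(n - 1, k)
```

Without the cache, the recursion would recompute the same `(n, k)` pairs exponentially many times; with it, each pair is computed once.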

Application of Dynamic Programming

Typical computing problems that can be solved with a dynamic programming approach, together with their time complexities, are as follows:

  • Fibonacci number series: O(n)
  • 0/1 Knapsack problem: O(mn)
  • All-pairs shortest paths (Floyd-Warshall): O(n³)
  • Longest common subsequence: O(mn)
  • Matrix chain multiplication: O(n³)
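As one example from this list, the O(mn) longest common subsequence can be sketched with a two-dimensional table (the function name `lcs_length` is illustrative):

```python
def lcs_length(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of the prefixes a[:i] and b[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise, drop one character from either string.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

The nested loops fill m x n table cells, each in constant time, giving the O(mn) bound listed above.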

Dynamic Programming vs Greedy Algorithms

Dynamic programming and greedy algorithms are both optimization techniques. A greedy algorithm, however, builds its answer from locally optimal choices in the hope of reaching a global optimum. A choice that appears optimal at the moment may prove costly later on, so greedy algorithms do not guarantee a globally optimal solution.

Dynamic programming, on the other hand, determines the optimal solution to every subproblem and then combines them, so the final answer is guaranteed to be optimal. Check out the difference between Dynamic Programming and Greedy algorithm in detail.
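A small coin-change instance makes the contrast concrete. With coin denominations {1, 3, 4} and amount 6, the greedy choice (always take the largest coin) uses three coins (4 + 1 + 1), while dynamic programming finds the optimal two coins (3 + 3). The function names below are our own, used for illustration:

```python
def greedy_coins(coins, amount):
    # Greedy: repeatedly take the largest coin that still fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    # dp[x] = minimum number of coins needed to make amount x.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for x in range(1, amount + 1):
        # Try every coin as the last one used, reusing smaller subproblems.
        dp[x] = min((dp[x - c] + 1 for c in coins if c <= x), default=INF)
    return dp[amount]
```

Here `greedy_coins([1, 3, 4], 6)` returns 3 while `dp_coins([1, 3, 4], 6)` returns 2, showing that the locally best first pick (the 4-coin) leads the greedy method away from the global optimum.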



Dynamic Programming FAQs

  • What is dynamic programming? By definition, it is a method for solving difficult problems by first decomposing them into a number of simpler subproblems, solving each subproblem only once, and then storing the answers to avoid performing the same calculations repeatedly.

  • Why is it called "dynamic programming"? Bellman adopted the word "dynamic" to sound impressive and to describe the problem's time-varying components. In the context of military training and logistics planning, the term "programming" denoted the use of a technique to find an optimal program.

  • Why is Java considered more dynamic than C or C++? Java is designed to adapt to a changing environment: Java programs carry a large quantity of run-time information that can be used to verify and resolve object accesses at run time.

  • Is dynamic programming just recursion? Dynamic programming is recursion with memoization: values are calculated and stored so that repeating subproblems can be answered from the store, which speeds up your code and lowers the time complexity (fewer CPU cycles are spent).

  • Who is the father of dynamic programming? Richard E. Bellman (1920-1984) is considered the father of dynamic programming. He authored numerous works and received numerous awards, including the first Norbert Wiener Prize in Applied Mathematics.
