What is Dynamic Programming?
Dynamic programming breaks a difficult problem down into several smaller, simpler subproblems, solves each of those subproblems just once, and stores the results. To produce the optimal solution to the original problem, a dynamic programming algorithm looks up the previously solved subproblems and combines their solutions.
It applies when a solution can be expressed recursively in terms of solutions to subproblems (optimal substructure). In dynamic programming, the problem is typically solved bottom-up, by building a table of solved subproblems that are then used to solve larger ones.
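As a minimal sketch of this bottom-up, table-building approach, consider computing Fibonacci numbers (the function name and structure here are illustrative, not from any particular library):

```python
def fib_bottom_up(n):
    """Bottom-up DP: build a table of subproblem answers from the base cases up."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry combines two already-solved smaller subproblems.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Because every subproblem is solved exactly once and read from the table afterward, the whole computation is linear in `n`, in contrast to the exponential naive recursion.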
Working of Dynamic Programming
The basic requirements for dynamic programming are listed below. Dynamic programming applies to problems with the following characteristics:
- The subproblems overlap, and the problem has optimal substructure. In this context, optimal substructure means that the optimization problem can be solved by combining the optimal solutions to its constituent subproblems.
- Because dynamic programming stores intermediate results, space complexity grows, but time complexity decreases.
The steps that dynamic programming takes are as follows:
- Divide the complex problem into simpler subproblems.
- Solve each of these subproblems optimally.
- Store the results of the smaller subproblems (memoization). Memoization is the process of storing the answers to subproblems.
- Reuse the stored results so that the same subproblem is never computed more than once.
- Finally, combine the stored results to obtain the answer to the complex problem.
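The steps above can be sketched as a top-down, memoized computation. This example (again using Fibonacci, with a call counter added purely to demonstrate the reuse step) is illustrative:

```python
def fib_memo(n, memo=None, calls=None):
    """Top-down DP: recurse into subproblems, solve each once, memoize the result."""
    if memo is None:
        memo = {}
    if calls is not None:
        calls[0] += 1           # count how many times we actually recurse
    if n in memo:               # reuse a stored answer instead of recomputing
        return memo[n]
    if n < 2:                   # base cases: the simplest subproblems
        result = n
    else:
        result = fib_memo(n - 1, memo, calls) + fib_memo(n - 2, memo, calls)
    memo[n] = result            # memoization: store this subproblem's answer
    return result
```

With memoization, computing `fib_memo(30)` makes only a linear number of recursive calls, whereas plain recursion would make millions.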
Application of Dynamic Programming
Common computational problems that can be solved with a dynamic programming approach, together with their time complexities, include:
- Fibonacci number series: O(n)
- Knapsack problem: O(mn)
- All-pairs shortest paths (Floyd-Warshall): O(n³)
- Longest common subsequence: O(mn)
- Matrix chain multiplication: O(n³)
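As a concrete instance from the list above, here is a sketch of the longest common subsequence length via a DP table, which runs in O(mn) time for strings of lengths m and n (the function name is illustrative):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b, via a DP table."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the LCS of the shorter prefixes.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise take the better of dropping one character from a or b.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Each table cell is filled once from already-computed cells, which is exactly the optimal-substructure and overlapping-subproblems pattern described earlier.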
Dynamic Programming vs Greedy Algorithms
Dynamic programming and greedy algorithms are both optimization techniques. However, a greedy algorithm makes the locally optimal choice at each step in the hope of reaching a global optimum. A choice that looks best at the moment can therefore turn out to be expensive later, so a greedy algorithm does not guarantee a globally optimal solution.
Dynamic programming, on the other hand, solves every subproblem optimally and then selects the best overall solution from those results.
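A classic illustration of this difference is coin change with the (hypothetical) denominations {1, 3, 4}: greedy picks the largest coin first and can overpay in coins, while DP always finds the minimum. This sketch compares the two:

```python
def greedy_coins(coins, amount):
    """Greedy: repeatedly take the largest coin that fits (may be suboptimal)."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """DP: compute the minimum coin count for every amount up to the target."""
    INF = float("inf")
    best = [0] + [INF] * amount   # best[x] = fewest coins that sum to x
    for x in range(1, amount + 1):
        for c in coins:
            if c <= x and best[x - c] + 1 < best[x]:
                best[x] = best[x - c] + 1
    return best[amount] if best[amount] != INF else None
```

For an amount of 6, greedy takes 4 + 1 + 1 (three coins), while DP finds 3 + 3 (two coins), showing why the greedy choice does not guarantee a global optimum.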