This article is aimed at readers who want to learn dynamic programming from the very basics. There are many courses on dynamic programming, but the goal of this article is to keep things as simple as possible so that readers can easily follow what is being taught and learn something from it.

This doesn’t mean that we will only discuss the basics; by the end of this article, you should be able to tackle most dynamic programming problems you encounter.

Dynamic programming is a technique for solving a certain set of problems by dividing them into smaller subproblems. It applies only to problems that have specific properties and can be solved in a particular way.

This technique is used in many real-world applications across domains such as mathematics, game theory, economics, statistics, linguistics, computer science, and so forth. Dynamic programming can also be applied to everyday problems.

The most powerful thing about dynamic programming is that it can solve many problems in polynomial time for which a naive approach would take exponential time. In other words, dynamic programming is an optimization method: as we’ve just noted, in many cases it brings an exponential runtime down to polynomial complexity.

Sometimes it is possible to apply a few additional ideas and optimize the solution even further, down to linear runtime complexity.

If you want to solve a problem with dynamic programming, it has to have two properties, which are as follows:

## Optimal substructure in Dynamic Programming

The term optimal substructure has two components. Optimal simply means the best or most favorable, whereas substructure refers to a subproblem of the main problem.

A problem has the optimal substructure property when its optimal solution can be constructed from the optimal solutions of its subproblems. So by solving each subproblem in its most optimal way, we obtain the optimal solution to the original problem.

For example, if there’s a complex problem and we don’t know how to solve it all at once, we will solve it step by step. Instead of solving the entire problem in one go, we first solve a small piece of it, say X1, and call it our subproblem.

So we solve X1 in its most optimal way, and once we have a solution we can extend our problem space and solve X2. To solve X2 we don’t have to recalculate everything from the start, because we can reuse the result of X1. By doing so we get the optimal solution for X2, and we repeat this process until we have solved the entire problem.

We simply break the problem down into many subproblems and solve each of them in an optimal way.
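As a concrete illustration of this idea (an example chosen for this sketch, not one from the text above), consider the maximum-subarray problem: the best subarray ending at position i is built directly from the best subarray ending at position i - 1, so each subproblem’s optimal answer extends the previous one.

```python
# Illustration of optimal substructure: the optimal solution ending at
# each index is constructed from the optimal solution ending at the
# previous index -- we never recompute anything from scratch.

def max_subarray_sum(nums):
    best_ending_here = nums[0]  # optimal solution to the first subproblem
    best_overall = nums[0]
    for x in nums[1:]:
        # Either extend the previous optimal subarray, or start fresh at x.
        best_ending_here = max(best_ending_here + x, x)
        best_overall = max(best_overall, best_ending_here)
    return best_overall

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```

Each loop iteration plays the role of moving from subproblem X1 to X2: the answer for the longer prefix is built from the answer for the shorter one.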

## Overlapping subproblems in Dynamic Programming

When you break a problem into many subproblems, you will notice that you sometimes need to redo the same work multiple times. When that happens, we say the problem has the overlapping subproblems property, and such a problem is a candidate for dynamic programming.

Dynamic programming problems can be divided into two categories:

## Combinatorial problems in Dynamic programming

Combinatorial problems answer the question “how many?” Here are a few examples of combinatorial problems:

- How many ways to make a change?
- How many ways to traverse the graph?
- How many steps are needed to get from point A to point B?

In this type of problem, your end goal is to count something.
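The first example above, counting the ways to make change, can be sketched as a small dynamic program. This is a minimal sketch with sample coin values chosen for illustration:

```python
# Combinatorial DP: count the number of ways to make a given amount
# from a set of coin denominations (order of coins does not matter).

def count_change(amount, coins):
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to make 0: use no coins at all
    for coin in coins:
        for total in range(coin, amount + 1):
            # Every way to make (total - coin) extends to a way to make total.
            ways[total] += ways[total - coin]
    return ways[amount]

print(count_change(5, [1, 2, 5]))  # 4: 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
```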

## Optimization problems in Dynamic Programming

In optimization problems, we are interested in finding a strategy that maximizes or minimizes some function. A few examples of optimization problems:

- What is the minimum number of steps needed to get to a certain destination?
- What is the maximum profit gained by buying and selling stock?
- What is the minimum cost to travel from New York to Mumbai?

So in optimization problems, your end goal is to minimize or maximize some function.
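To contrast with the combinatorial category, here is the optimization flavor of the same coin problem: instead of counting ways, we minimize the number of coins used. Again, the coin values are just sample data for this sketch:

```python
# Optimization DP: find the minimum number of coins needed to reach
# a target amount. Unreachable totals stay at infinity.

import math

def min_coins(amount, coins):
    best = [0] + [math.inf] * amount
    for total in range(1, amount + 1):
        for coin in coins:
            if coin <= total:
                # Taking this coin costs 1 plus the best for the remainder.
                best[total] = min(best[total], best[total - coin] + 1)
    return best[amount]

print(min_coins(11, [1, 2, 5]))  # 3 coins: 5 + 5 + 1
```

Note how the recurrence uses `min` where the combinatorial version used a sum: that single change is what separates the two categories.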

## Memoization Technique of Dynamic Programming

With memoization, we cache the result of a computation so that we can reuse it later, meaning we optimize time complexity by giving up space. It’s a time–memory trade-off, much like storing a pre-solved problem, and it speeds up programs considerably.
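A minimal sketch of memoization applied to Fibonacci, using Python’s standard `functools.lru_cache` decorator as the cache:

```python
# Memoized Fibonacci: each subproblem is solved exactly once and its
# result is cached, turning exponential time into linear time.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 -- instant, while the naive version would take minutes
```

Writing the cache by hand as a dictionary works just as well; the decorator simply does that bookkeeping for you.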

### Climbing Stairs Problem

Climbing stairs is a popular problem that you may be asked during a phone-screen interview at Google, Amazon, Facebook, etc. (phone screens are generally simpler than on-site interviews). There are many variations of the problem; in one of them, there is a staircase and you can climb either one or two steps at a time, and your goal is to answer the question: in how many distinct ways can you climb to the top?

For a staircase with four stairs, we can reach the top in the following ways:

- one step at a time (1, 1, 1, 1)
- a single step, then a double step, then a single step (1, 2, 1)
- a double step first, then two single steps (2, 1, 1)
- two single steps, then a double step (1, 1, 2)
- two double steps (2, 2)

So in total we have five different ways to get to the top of the staircase. How do we solve this problem? The first thing we need to do is express our goal as an objective function: the function that you want to compute, minimize, or maximize in your problem.
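A memoized sketch of the climbing stairs solution: since the last move onto stair n is either a single step from n - 1 or a double step from n - 2, the counts for those two subproblems add up.

```python
# Climbing stairs with memoization:
# ways(n) = ways(n - 1) + ways(n - 2), because the final move is
# either a single step or a double step.

def climb(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return 1  # one way: either no moves left, or one single step
    memo[n] = climb(n - 1, memo) + climb(n - 2, memo)
    return memo[n]

print(climb(4))  # 5 -- matches the five ways listed above
```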

### Framework and steps to solve Dynamic Programming Problem

The steps that we want to take when we solve the dynamic programming problem are:

- Define the objective function.
- Identify the base cases: these are the boundaries of your problem, the starting point of the dynamic programming algorithm.
- Write the recurrence relation: this is the transition function that allows you to move from one subproblem to the next.
- Determine the order of computation: the order in which the subproblems are solved.
- Find the location of the answer: where in the computed values we look for the final answer.
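The five steps above can be sketched for the climbing stairs problem, this time bottom-up rather than with memoization:

```python
# The framework applied to climbing stairs:
# 1. Objective function: f(i) = number of distinct ways to reach stair i.
# 2. Base cases: f(0) = 1, f(1) = 1.
# 3. Recurrence relation: f(i) = f(i - 1) + f(i - 2).
# 4. Order of computation: i = 2, 3, ..., n (left to right).
# 5. Location of the answer: f(n).

def climb_stairs(n):
    if n <= 1:
        return 1
    f = [0] * (n + 1)
    f[0], f[1] = 1, 1               # base cases
    for i in range(2, n + 1):       # order of computation
        f[i] = f[i - 1] + f[i - 2]  # recurrence relation
    return f[n]                     # location of the answer

print(climb_stairs(4))  # 5
```

Walking through the same checklist works for most dynamic programming problems, whether you then implement the solution top-down with memoization or bottom-up as above.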