Dynamic Programming
What is Dynamic Programming?
Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them down into simpler subproblems and storing their results to avoid redundant calculations.
The key idea: If you've already solved a subproblem, don't solve it again; just look up the answer!
Two fundamental principles:
- Overlapping subproblems - the same smaller problems are solved multiple times
- Optimal substructure - the optimal solution can be built from optimal solutions to subproblems
Why it matters: DP can transform exponentially slow algorithms into polynomial or even linear time algorithms by trading memory for speed.
Prerequisites: Why Dictionaries Are Perfect for DP
Before diving into dynamic programming, you should understand Python dictionaries. If you're not comfortable with dictionaries yet, review them first; they're the foundation of most DP solutions.
Quick dictionary essentials for DP:
# Creating and using dictionaries
cache = {} # Empty dictionary
# Store results
cache[5] = 120
cache[6] = 720
# Check if we've seen this before
if 5 in cache:  # O(1) - instant lookup!
    print(cache[5])
# This is why dictionaries are perfect for DP!
Why dictionaries work for DP:
- O(1) lookup time - checking if a result exists is instant
- O(1) insertion time - storing a new result is instant
- Flexible keys - can store results for any input value
- Clear mapping - easy relationship between input (key) and result (value)
Now let's see DP in action with a classic example.
The Classic Example: Fibonacci
The Fibonacci sequence is perfect for understanding DP because it clearly shows the problem of redundant calculations.
The Problem: Naive Recursion
Fibonacci definition:
- F(0) = 0
- F(1) = 1
- F(n) = F(n-1) + F(n-2)
Naive recursive solution:
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55
# Try fibonacci(40) - it takes forever!
Why Is This So Slow?
Look at the redundant calculations for fibonacci(5):
fibonacci(5)
├── fibonacci(4)
│   ├── fibonacci(3)
│   │   ├── fibonacci(2)
│   │   │   ├── fibonacci(1) ← Calculated
│   │   │   └── fibonacci(0) ← Calculated
│   │   └── fibonacci(1) ← Calculated AGAIN
│   └── fibonacci(2) ← Calculated AGAIN
│       ├── fibonacci(1) ← Calculated AGAIN
│       └── fibonacci(0) ← Calculated AGAIN
└── fibonacci(3) ← Entire subtree calculated AGAIN
    ├── fibonacci(2) ← Calculated AGAIN
    │   ├── fibonacci(1) ← Calculated AGAIN
    │   └── fibonacci(0) ← Calculated AGAIN
    └── fibonacci(1) ← Calculated AGAIN
The numbers:
- fibonacci(1) is calculated 5 times
- fibonacci(2) is calculated 3 times
- fibonacci(3) is calculated 2 times
For fibonacci(40), you'd do 331,160,281 function calls. That's insane for a simple calculation!
Time complexity: O(2^n) - exponential! Each call spawns two more calls.
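You can watch this exponential growth yourself by counting calls. Here's a small sketch (the function name fibonacci_counted and the call_count variable are made up just for this demo):

call_count = 0  # counts how many times the naive function is invoked

def fibonacci_counted(n):
    global call_count
    call_count += 1
    if n <= 1:
        return n
    return fibonacci_counted(n - 1) + fibonacci_counted(n - 2)

for n in [10, 20, 25]:
    call_count = 0
    fibonacci_counted(n)
    print(f"fibonacci({n}) made {call_count} calls")

# fibonacci(10) made 177 calls
# fibonacci(20) made 21891 calls
# fibonacci(25) made 242785 calls

Each count equals 2 * F(n+1) - 1, which is exactly how fibonacci(40) ends up at 331,160,281 calls.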
Dynamic Programming Solution: Memoization
Memoization = storing (caching) results we've already calculated using a dictionary.
# Dictionary to store computed results
memo = {}

def fibonacci_dp(n):
    # Check if we've already calculated this
    if n in memo:
        return memo[n]
    # Base cases
    if n <= 1:
        return n
    # Calculate, store, and return
    result = fibonacci_dp(n - 1) + fibonacci_dp(n - 2)
    memo[n] = result
    return result

# First call - calculates and stores results
print(fibonacci_dp(10))  # 55
print(memo)  # {2: 1, 3: 2, 4: 3, 5: 5, 6: 8, 7: 13, 8: 21, 9: 34, 10: 55}

# Subsequent calls - instant lookups!
print(fibonacci_dp(50))   # 12586269025 (instant!)
print(fibonacci_dp(100))  # Works perfectly, still instant!
How Memoization Works: Step-by-Step
Let's trace fibonacci_dp(5) starting with an empty memo:
Call fibonacci_dp(5):
    5 not in memo
    Calculate: fibonacci_dp(4) + fibonacci_dp(3)
    Call fibonacci_dp(4):
        4 not in memo
        Calculate: fibonacci_dp(3) + fibonacci_dp(2)
        Call fibonacci_dp(3):
            3 not in memo
            Calculate: fibonacci_dp(2) + fibonacci_dp(1)
            Call fibonacci_dp(2):
                2 not in memo
                Calculate: fibonacci_dp(1) + fibonacci_dp(0)
                fibonacci_dp(1) = 1 (base case)
                fibonacci_dp(0) = 0 (base case)
                memo[2] = 1, return 1
            fibonacci_dp(1) = 1 (base case)
            memo[3] = 2, return 2
        Call fibonacci_dp(2):
            2 IS in memo! Return 1 immediately (no calculation!)
        memo[4] = 3, return 3
    Call fibonacci_dp(3):
        3 IS in memo! Return 2 immediately (no calculation!)
    memo[5] = 5, return 5

Final memo: {2: 1, 3: 2, 4: 3, 5: 5}
Notice: We only calculate each Fibonacci number once. All subsequent requests are instant dictionary lookups!
Time complexity: O(n) - we calculate each number from 0 to n exactly once
Space complexity: O(n) - we store n results in the dictionary
Comparison:
- Without DP: fibonacci(40) = 331,160,281 operations
- With DP: fibonacci(40) = roughly 40 operations
That's over 8 million times faster!
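If you want to see the difference on your own machine, here's an informal timing sketch (times will vary by hardware; fib_naive, fib_memo, and fib_cache are throwaway names for this comparison):

import time

def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_cache = {}

def fib_memo(n):
    if n in fib_cache:
        return fib_cache[n]
    if n <= 1:
        return n
    fib_cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return fib_cache[n]

start = time.perf_counter()
fib_naive(32)
print(f"naive fib(32):    {time.perf_counter() - start:.4f} seconds")

start = time.perf_counter()
fib_memo(32)
print(f"memoized fib(32): {time.perf_counter() - start:.6f} seconds")

Expect the naive version to take on the order of a second and the memoized one to finish in a tiny fraction of a millisecond.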
Top-Down vs Bottom-Up Approaches
There are two main ways to implement DP:
Top-Down (Memoization) - What We Just Did
Start with the big problem and recursively break it down, storing results as you go.
memo = {}

def fib_topdown(n):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_topdown(n - 1) + fib_topdown(n - 2)
    return memo[n]
Pros:
- Intuitive if you think recursively
- Only calculates what's needed
- Easy to add memoization to existing recursive code
Cons:
- Uses recursion (stack space)
- Slightly slower due to function call overhead
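By the way, you don't always have to manage the dictionary yourself: Python's standard library provides functools.lru_cache, which adds exactly this kind of top-down memoization as a decorator. A quick sketch:

from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache - behaves like our memo dictionary
def fib_cached(n):
    if n <= 1:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(50))  # 12586269025

Under the hood it's still a cache keyed by the function's arguments, so everything in this section applies.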
Bottom-Up (Tabulation) - Build From Smallest
Start with the smallest subproblems and build up to the answer.
def fib_bottomup(n):
    if n <= 1:
        return n
    # Build table from bottom up
    dp = {0: 0, 1: 1}
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_bottomup(10))  # 55
Even more optimized (space-efficient):
def fib_optimized(n):
    if n <= 1:
        return n
    # We only need the last two values
    prev2, prev1 = 0, 1
    for _ in range(2, n + 1):
        current = prev1 + prev2
        prev2, prev1 = prev1, current
    return prev1

print(fib_optimized(100))  # 354224848179261915075
Pros:
- No recursion (no stack overflow risk)
- Can optimize space usage (we did it above!)
- Often slightly faster
Cons:
- Less intuitive at first
- Calculates all subproblems even if not needed
When to Use Dynamic Programming
Use DP when you spot these characteristics:
1. Overlapping Subproblems
The same calculations are repeated many times.
Example: In Fibonacci, we calculate F(3) multiple times when computing F(5).
2. Optimal Substructure
The optimal solution to the problem contains optimal solutions to subproblems.
Example: The optimal path from A to C through B must include the optimal path from A to B.
3. You Can Define a Recurrence Relation
You can express the solution in terms of solutions to smaller instances.
Example: F(n) = F(n-1) + F(n-2)
Common DP Problem Patterns
1. Climbing Stairs
Problem: How many distinct ways can you climb n stairs if you can take 1 or 2 steps at a time?
def climbStairs(n):
    if n <= 2:
        return n
    memo = {1: 1, 2: 2}
    for i in range(3, n + 1):
        memo[i] = memo[i - 1] + memo[i - 2]
    return memo[n]

print(climbStairs(5))  # 8
# Ways: 1+1+1+1+1, 1+1+1+2, 1+1+2+1, 1+2+1+1, 2+1+1+1, 1+2+2, 2+1+2, 2+2+1
Key insight: This is actually Fibonacci in disguise! To reach step n, you either came from step n-1 (one step) or step n-2 (two steps).
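The same recurrence can also be written top-down, which mirrors the insight directly (a sketch; climb_stairs_memo is just a name for this example):

def climb_stairs_memo(n, memo=None):
    # ways(n) = ways(n - 1) + ways(n - 2): the last move was either a 1-step or a 2-step
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 2:
        return n
    memo[n] = climb_stairs_memo(n - 1, memo) + climb_stairs_memo(n - 2, memo)
    return memo[n]

print(climb_stairs_memo(5))   # 8
print(climb_stairs_memo(10))  # 89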
2. Coin Change
Problem: Given coins of different denominations, find the minimum number of coins needed to make a target amount.
def coinChange(coins, amount):
    # dp[i] = minimum coins needed to make amount i
    dp = {0: 0}
    for i in range(1, amount + 1):
        min_coins = float('inf')
        # Try each coin
        for coin in coins:
            if i - coin >= 0 and i - coin in dp:
                min_coins = min(min_coins, dp[i - coin] + 1)
        if min_coins != float('inf'):
            dp[i] = min_coins
    return dp.get(amount, -1)

print(coinChange([1, 2, 5], 11))  # 3 (5 + 5 + 1)
print(coinChange([2], 3))         # -1 (impossible)
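This problem works top-down too. Here's a sketch of a memoized version (coin_change_topdown is a made-up name; it assumes all coin values are positive):

def coin_change_topdown(coins, amount, memo=None):
    # Fewest coins for `amount`, trying each coin as the last one used
    if memo is None:
        memo = {}
    if amount == 0:
        return 0
    if amount < 0:
        return float('inf')  # this branch can't be completed
    if amount in memo:
        return memo[amount]
    memo[amount] = min(coin_change_topdown(coins, amount - coin, memo) + 1 for coin in coins)
    return memo[amount]

result = coin_change_topdown([1, 2, 5], 11)
print(result if result != float('inf') else -1)  # 3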
The DP Recipe: How to Solve DP Problems
1. Identify if it's a DP problem
   - Do you see overlapping subproblems?
   - Can you break it into smaller similar problems?
2. Define the state
   - What information do you need to solve each subproblem?
   - This becomes your dictionary key
3. Write the recurrence relation
   - How do you calculate dp[n] from smaller subproblems?
   - Example: F(n) = F(n-1) + F(n-2)
4. Identify base cases
   - What are the smallest subproblems you can solve directly?
   - Example: F(0) = 0, F(1) = 1
5. Implement and optimize
   - Start with top-down memoization (easier to write; a small reusable helper is sketched below)
   - Optimize to bottom-up if needed
   - Consider space optimization
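To make step 5 concrete, here is a minimal reusable helper built on a plain dictionary (the memoize decorator below is a sketch for single-argument functions, not a standard library tool; functools.lru_cache does the same job more robustly):

def memoize(func):
    cache = {}  # step 2: the argument is the state, and it becomes the dictionary key

    def wrapper(n):
        if n in cache:       # already solved this subproblem?
            return cache[n]
        cache[n] = func(n)   # solve it once, store the result
        return cache[n]

    return wrapper

@memoize
def fib(n):
    # steps 3 and 4: recurrence F(n) = F(n-1) + F(n-2), base cases F(0) = 0 and F(1) = 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(60))  # 1548008755920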
Common Mistakes to Avoid
1. Forgetting to Check the Cache
# Wrong - doesn't check memo first
def fib_wrong(n):
    if n <= 1:
        return n
    memo[n] = fib_wrong(n - 1) + fib_wrong(n - 2)  # Calculates every time!
    return memo[n]

# Correct - checks memo first
def fib_correct(n):
    if n in memo:  # Check first!
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_correct(n - 1) + fib_correct(n - 2)
    return memo[n]
2. Not Storing the Result
# Wrong - calculates but doesn't store
def fib_wrong(n):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    return fib_wrong(n - 1) + fib_wrong(n - 2)  # Doesn't store!

# Correct - stores before returning
def fib_correct(n):
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_correct(n - 1) + fib_correct(n - 2)  # Store it!
    return memo[n]
3. Using Mutable Default Arguments
# Wrong - memo persists between calls!
def fib_wrong(n, memo={}):
    # ...

# Correct - create fresh memo or pass it explicitly
def fib_correct(n, memo=None):
    if memo is None:
        memo = {}
    # ...
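Filling in the bodies, the explicit-memo pattern looks like this (a sketch; passing memo as a parameter keeps the cache under the caller's control):

def fib_correct(n, memo=None):
    if memo is None:
        memo = {}  # fresh cache for each top-level call
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib_correct(n - 1, memo) + fib_correct(n - 2, memo)
    return memo[n]

print(fib_correct(40))  # 102334155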
Summary
Dynamic Programming is about:
- Recognizing overlapping subproblems
- Storing solutions to avoid recalculation
- Trading memory for speed
Key techniques:
- Top-down (memoization): Recursive + dictionary cache
- Bottom-up (tabulation): Iterative + build from smallest
When to use:
- Same subproblems solved repeatedly
- Optimal substructure exists
- Can define recurrence relation
The power of DP:
- Transforms exponential O(2^n) → linear O(n)
- Essential for many algorithmic problems
- Dictionaries make implementation clean and fast
Remember: Not every problem needs DP! Use it when you spot repeated calculations. Sometimes a simple loop or greedy algorithm is better.
Practice Problems to Try
- House Robber - Maximum money you can rob from houses without robbing adjacent ones
- Longest Common Subsequence - Find longest sequence common to two strings
- Edit Distance - Minimum operations to convert one string to another
- Maximum Subarray - Find contiguous subarray with largest sum
- Unique Paths - Count paths in a grid from top-left to bottom-right
Each of these follows the same DP pattern we've learned. Try to identify the state, recurrence relation, and base cases!