4. Dynamic Programming

Dynamic programming is an efficient technique for solving many combinatorial optimization problems in polynomial time.

Dynamic programming is both a mathematical optimization method and a computer programming method. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner [1]. There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems.

Principle of Optimality

An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision [2]. (See Bellman, 1957, Chap. III.3.)
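This principle is often written as a recurrence relating the value of a state to the values of its successors. One common form (a sketch; here $V$ is the value function, $s$ a state, $a$ a decision, $r(s, a)$ the immediate reward, and $T(s, a)$ the resulting state) is:

$$ V(s) = \max_{a} \, \bigl[\, r(s, a) + V(T(s, a)) \,\bigr] $$

Solving this recurrence bottom-up over the state space is exactly what the dynamic programs in this section do.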

4.1. Binomial Coefficient

In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem [3]. Commonly, a binomial coefficient is indexed by a pair of integers $ n \geq k \geq 0 $ and is written $ \tbinom {n}{k} $. It is the coefficient of the $ x^k $ term in the polynomial expansion of the binomial power $ (1 + x)^n $, and it is given by the formula:

$$ \tbinom {n}{k} = \frac {n!}{k!(n-k)!} \tag{1}$$

4.1.1. Formula approach
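
A direct translation of formula (1); a sketch in Python (the function name is my own):

```python
from math import factorial

def binomial_formula(n, k):
    """Binomial coefficient C(n, k) via the factorial formula (1)."""
    if k < 0 or k > n:
        return 0
    # integer division is exact: k!(n-k)! always divides n!
    return factorial(n) // (factorial(k) * factorial(n - k))
```

For example, `binomial_formula(5, 2)` returns `10`. The drawback is that the intermediate factorials grow very large even when the final coefficient is small.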

4.1.2. Simple approach
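
A naive recursive version built on Pascal's rule, $\tbinom{n}{k} = \tbinom{n-1}{k-1} + \tbinom{n-1}{k}$ (a sketch; the function name is my own):

```python
def binomial_recursive(n, k):
    """C(n, k) by Pascal's rule.

    Exponential time: the same sub-problems are recomputed many
    times, which is exactly the waste dynamic programming avoids.
    """
    if k == 0 or k == n:
        return 1
    return binomial_recursive(n - 1, k - 1) + binomial_recursive(n - 1, k)
```

The overlapping sub-problems are easy to see: `binomial_recursive(n-1, k-1)` and `binomial_recursive(n-1, k)` both recurse into `binomial_recursive(n-2, k-1)`.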

4.1.3. Dynamic Programming

With time complexity of $ \Theta(nk) $ and a space complexity of $ \Theta(k) $.
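A sketch of the tabulated version that achieves these bounds, keeping only a single row of Pascal's triangle (the function name is my own):

```python
def binomial_dp(n, k):
    """C(n, k) with Theta(n*k) time and Theta(k) space.

    After processing row i, row[j] holds C(i, j); updating
    right-to-left lets one array serve as both the previous
    and the current row of Pascal's triangle.
    """
    row = [0] * (k + 1)
    row[0] = 1
    for i in range(1, n + 1):
        # right-to-left so row[j-1] still belongs to row i-1
        for j in range(min(i, k), 0, -1):
            row[j] += row[j - 1]
    return row[k]
```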

4.2. World Championship problem

4.2.1. Simple approach
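
The section text does not state the problem, but a classic formulation that matches the bounds below is a best-of series: team A wins each game with probability $p$, the first team to $n$ wins takes the championship, and $P(i, j)$ is the probability that A wins the series when A still needs $i$ wins and B still needs $j$. A plain recursive sketch under that assumption (names are my own):

```python
def championship_recursive(i, j, p):
    """Probability that team A wins the series when A needs i more
    wins and B needs j more, A winning each game with probability p.

    Plain recursion: exponential time from overlapping sub-problems.
    """
    if i == 0:
        return 1.0   # A already has enough wins
    if j == 0:
        return 0.0   # B already took the series
    return p * championship_recursive(i - 1, j, p) \
        + (1 - p) * championship_recursive(i, j - 1, p)
```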

4.2.2. Dynamic Programming

With time complexity of $ \Theta(n^2) $ and a space complexity of $ \Theta(n^2) $.
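Assuming the classic best-of-series formulation ($P(i, j)$ = probability that team A wins the series when A needs $i$ more wins and B needs $j$, A winning each game with probability $p$), filling an $(n+1) \times (n+1)$ table bottom-up gives the stated bounds (a sketch; names are my own):

```python
def championship_probability(n, p):
    """Probability that team A wins a first-to-n series, winning
    each game with probability p.

    P[i][j] = probability A takes the series when A still needs
    i wins and B still needs j; Theta(n^2) time and space.
    """
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        P[0][j] = 1.0                    # A already has enough wins
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            P[i][j] = p * P[i - 1][j] + (1 - p) * P[i][j - 1]
    return P[n][n]
```

By symmetry, `championship_probability(n, 0.5)` is `0.5` for any series length.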

4.3. Coin Change problem

The coin-change problem or change-making problem addresses the question of finding the minimum number of coins (of certain denominations) that add up to a given amount of money. It is a special case of the integer knapsack problem, and has applications wider than just currency [4].

Counting all possible combinations of coins that make the change, with dynamic programming.

Version with unlimited supply of coins.

With time complexity of $ \Theta(nN) $ and a space complexity of $ \Theta(n(N + 1)) $.
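A sketch of the counting table that achieves these bounds (names are my own): `c[i][j]` is the number of ways to make amount `j` using only the first `i` denominations, with an unlimited supply of each coin.

```python
def count_change(coins, N):
    """Number of ways to make amount N from the given denominations,
    unlimited supply of each; full (n+1) x (N+1) table, so
    Theta(n*N) time and Theta(n*(N+1)) space.
    """
    n = len(coins)
    c = [[0] * (N + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        c[i][0] = 1                      # one way to make 0: no coins
    for i in range(1, n + 1):
        for j in range(1, N + 1):
            c[i][j] = c[i - 1][j]        # combinations without coin i
            if j >= coins[i - 1]:
                c[i][j] += c[i][j - coins[i - 1]]  # use coin i (again)
    return c[n][N]
```

For example, with denominations 1, 2, 5 the amount 5 can be made in four ways: 5, 2+2+1, 2+1+1+1 and 1+1+1+1+1.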

Calculate the list of coins needed to give change

Greedy approach

With time complexity of $ \Theta(n + c[n, N]) $ and a space complexity of $ \Theta(n(N + 1)) $.
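One way to recover an actual list of coins (a sketch; assuming the minimum-coin formulation from the section introduction, with names of my own): tabulate the minimum number of coins `c[i][j]` for each prefix of denominations and each amount, then walk back through the table. The traceback drops a row when the coin was not used and emits a coin otherwise, which is where the stated Θ(n + c[n, N]) step count comes from.

```python
INF = float("inf")

def coins_for_change(coins, N):
    """Minimum-coin table plus traceback returning one optimal list."""
    n = len(coins)
    c = [[0 if j == 0 else INF for j in range(N + 1)]
         for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, N + 1):
            c[i][j] = c[i - 1][j]                      # without coin i
            if j >= coins[i - 1] and c[i][j - coins[i - 1]] + 1 < c[i][j]:
                c[i][j] = c[i][j - coins[i - 1]] + 1   # take coin i
    # traceback: Theta(n + c[n][N]) steps
    result, i, j = [], n, N
    while j > 0 and i > 0:
        if c[i][j] == c[i - 1][j]:
            i -= 1                       # coin i not needed here
        else:
            result.append(coins[i - 1])  # one more coin of this value
            j -= coins[i - 1]
    return result
```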

4.4. The Knapsack problem

The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit W and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items [5].

Get the best combination of items with dynamic programming.

With time complexity of $ \Theta(nW) $ and a space complexity of $ \Theta(n(W + 1)) $.
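A sketch matching these bounds for the 0/1 variant, where each item is taken at most once (names are my own): `best[i][w]` is the maximum value achievable with the first `i` items and capacity `w`.

```python
def knapsack(weights, values, W):
    """0/1 knapsack: maximum value within capacity W.

    best[i][w] = best value using the first i items in capacity w;
    Theta(n*W) time, Theta(n*(W+1)) space for the full table.
    """
    n = len(weights)
    best = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            best[i][w] = best[i - 1][w]                   # skip item i
            if w >= weights[i - 1]:
                take = best[i - 1][w - weights[i - 1]] + values[i - 1]
                if take > best[i][w]:
                    best[i][w] = take                     # take item i
    return best[n][W]
```

For example, items with weights 1, 3, 4, 5 and values 1, 4, 5, 7 in a capacity-7 knapsack yield value 9 (the weight-3 and weight-4 items).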

Calculate the list of items needed to fill the backpack

Greedy approach

With time complexity of $ \Theta(n + W) $ and a space complexity of $ \Theta(n(W + 1)) $.

4.5. Longest Common Subsequence (LCS) problem

The longest common subsequence (LCS) problem is the problem of finding the longest subsequence common to all sequences in a set of sequences (often just two sequences). It differs from the longest common substring problem: unlike substrings, subsequences are not required to occupy consecutive positions within the original sequences [6].

The longest common subsequence problem is a classic computer science problem, the basis of data comparison programs such as the diff utility, and has applications in computational linguistics and bioinformatics. It is also widely used by revision control systems such as Git for reconciling multiple changes made to a revision-controlled collection of files.

Get the Longest Common Subsequence with Dynamic Programming

With time complexity of $ \Theta(nm) $ and a space complexity of $ \Theta((n + 1)(m + 1)) $.
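A sketch of the standard table that achieves these bounds (the function name is my own): `L[i][j]` is the LCS length of the prefixes `a[:i]` and `b[:j]`.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b.

    L[i][j] = LCS length of a[:i] and b[:j];
    Theta(n*m) time, Theta((n+1)*(m+1)) space.
    """
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1      # extend a common match
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]
```

For example, `lcs_length("ABCBDAB", "BDCABA")` is `4` (one LCS is "BCBA").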

Calculate the Longest Common Subsequence

Greedy approach

With time complexity of $ \Theta(n + m) $ and a space complexity of $ \Theta((n + 1)(m + 1)) $.
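To print an actual subsequence, trace back from `L[n][m]`: on a character match move diagonally and emit the character, otherwise step toward the larger neighbour. A self-contained sketch (the function name is my own); the traceback alone takes Θ(n + m) steps since every step decreases `i` or `j`:

```python
def lcs(a, b):
    """One longest common subsequence of a and b, via table + traceback."""
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # traceback: Theta(n + m)
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])         # part of the LCS
            i -= 1
            j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```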

4.6. Sequence Alignment problem

In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the sequences [7]. Aligned sequences of nucleotide or amino acid residues are typically represented as rows within a matrix. Gaps are inserted between the residues so that identical or similar characters are aligned in successive columns.

Sequence alignments are also used for non-biological sequences, such as calculating the distance cost between strings in a natural language or in financial data.

4.6.1. The Needleman-Wunsch algorithm

It was one of the first applications of dynamic programming to compare biological sequences. The algorithm essentially divides a large problem (e.g. the full sequence) into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem [8].

With time complexity of $ \Theta(nm) $ and a space complexity of $ \Theta((n + 1)(m + 1)) $.
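A sketch of the scoring table (names and the +1/-1/-1 match/mismatch/gap scores are my assumptions, not part of the text): `F[i][j]` is the best score for aligning the prefixes `a[:i]` and `b[:j]`.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score.

    F[i][j] = best score aligning a[:i] with b[:j];
    Theta(n*m) time, Theta((n+1)*(m+1)) space.
    """
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                # align a[:i] against gaps only
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align the two residues
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```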

Calculate the Sequence Alignment result

Greedy approach

With time complexity of $ \Theta(n + m) $ and a space complexity of $ \Theta((n + 1)(m + 1)) $.
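Producing the aligned strings is once more a traceback: from `F[n][m]`, re-derive which of the three moves produced each cell and emit a residue pair or a gap accordingly, in Θ(n + m) steps. A self-contained sketch (names and scoring parameters are my assumptions):

```python
def nw_align(a, b, match=1, mismatch=-1, gap=-1):
    """One optimal Needleman-Wunsch alignment of a and b as two
    equal-length strings with '-' marking gaps."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    # traceback: Theta(n + m)
    ra, rb, i, j = [], [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + s:
            ra.append(a[i - 1]); rb.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            ra.append(a[i - 1]); rb.append("-"); i -= 1   # gap in b
        else:
            ra.append("-"); rb.append(b[j - 1]); j -= 1   # gap in a
    return "".join(reversed(ra)), "".join(reversed(rb))
```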

4.7. All-Pairs Shortest Path

The all-pairs shortest path problem is the determination of the shortest graph distances between every pair of vertices in a given graph. The problem can be solved using n applications of Dijkstra's algorithm (one per source vertex, for non-negative edge weights) or all at once using the Floyd-Warshall algorithm [9].
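As a taste of the dynamic-programming formulation, a minimal Floyd-Warshall sketch (assuming an adjacency-matrix input with `float('inf')` for missing edges; the function name is my own):

```python
def floyd_warshall(dist):
    """In-place Floyd-Warshall on an n x n distance matrix.

    After iteration k, dist[i][j] is the shortest i -> j distance
    using only vertices 0..k as intermediates; Theta(n^3) time.
    """
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```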

See the Graphs section for a worked example of both algorithms; the second is solved with dynamic programming.

References

[1] Wikipedia - Dynamic Programming.
[2] Wikipedia - Principle of Optimality.
[3] Wikipedia - Binomial coefficient.
[4] Wikipedia - Change-making problem.
[5] Wikipedia - Knapsack problem.
[6] Wikipedia - Longest common subsequence problem.
[7] Wikipedia - Sequence alignment problem.
[8] Wikipedia - Needleman-Wunsch algorithm.
[9] Wikipedia - All-Pairs Shortest Path.
