
dynamic programming

(algorithmic technique)

Definition: Solve an optimization problem by caching subproblem solutions (memoization) rather than recomputing them.
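As a minimal sketch (not taken from the handbook), the longest common subsequence problem listed below can be solved this way: the natural recursive solution is written first, and each subproblem's answer is cached so it is computed at most once. The function name and example strings are illustrative.

    // Top-down longest common subsequence with memoization.
    // lcs(i, j) = length of the LCS of a[i..] and b[j..];
    // each subproblem result is cached so it is computed only once.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    int lcs(const std::string& a, const std::string& b,
            std::size_t i, std::size_t j,
            std::vector<std::vector<int>>& memo) {
        if (i == a.size() || j == b.size()) return 0;
        int& cached = memo[i][j];
        if (cached != -1) return cached;              // reuse a cached subproblem
        if (a[i] == b[j])
            cached = 1 + lcs(a, b, i + 1, j + 1, memo);
        else
            cached = std::max(lcs(a, b, i + 1, j, memo),
                              lcs(a, b, i, j + 1, memo));
        return cached;
    }

    int main() {
        std::string a = "nematode", b = "empty";
        std::vector<std::vector<int>> memo(a.size(), std::vector<int>(b.size(), -1));
        std::cout << lcs(a, b, 0, 0, memo) << "\n";   // prints 3 ("emt")
        return 0;
    }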

Aggregate parent (I am a part of or used in ...)
Smith-Waterman algorithm.

Solves these problems: matrix-chain multiplication problem, longest common substring, longest common subsequence.
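A small bottom-up sketch for one of these problems, matrix-chain multiplication, follows; it fills a table of subproblem costs in order of increasing chain length. The function name and example dimensions are illustrative, not taken from the handbook.

    // Bottom-up dynamic programming for matrix-chain multiplication.
    // dims has n+1 entries; matrix i is dims[i-1] x dims[i].
    // cost[i][j] = fewest scalar multiplications to compute the
    // product of matrices i..j.
    #include <iostream>
    #include <limits>
    #include <vector>

    long long matrixChainCost(const std::vector<int>& dims) {
        std::size_t n = dims.size() - 1;              // number of matrices
        std::vector<std::vector<long long>> cost(n + 1, std::vector<long long>(n + 1, 0));
        for (std::size_t len = 2; len <= n; ++len) {  // chain length
            for (std::size_t i = 1; i + len - 1 <= n; ++i) {
                std::size_t j = i + len - 1;
                cost[i][j] = std::numeric_limits<long long>::max();
                for (std::size_t k = i; k < j; ++k) { // split point
                    long long c = cost[i][k] + cost[k + 1][j]
                                + 1LL * dims[i - 1] * dims[k] * dims[j];
                    if (c < cost[i][j]) cost[i][j] = c;
                }
            }
        }
        return cost[1][n];
    }

    int main() {
        // Three matrices: 10x30, 30x5, 5x60 -> optimal cost 4500.
        std::vector<int> dims = {10, 30, 5, 60};
        std::cout << matrixChainCost(dims) << "\n";
        return 0;
    }

The same subproblems can also be evaluated top-down with memoization; see the hash-table sketch under Implementation.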

See also greedy algorithm, principle of optimality.

Note: From Algorithms and Theory of Computation Handbook, page 1-26, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.

Author: CRC-A

Implementation

Mark Nelson's tutorial on using C++ hash tables for memoization, Hash Table Memoization: Simplifying Dynamic Programming (C++). Oleg Kiselyov's program to optimally lay out a page (C++) using dynamic programming.
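As an illustrative sketch of the hash-table-memoization idea (not code from either reference), the matrix-chain problem shown above can also be solved top-down with a std::unordered_map as the cache, so only the subproblems actually reached are stored and no table has to be sized in advance.

    // Top-down matrix-chain cost, memoized in a hash table keyed on (i, j).
    #include <iostream>
    #include <unordered_map>
    #include <vector>

    long long chainCost(const std::vector<int>& dims, int i, int j,
                        std::unordered_map<long long, long long>& memo) {
        if (i == j) return 0;                                    // a single matrix costs nothing
        long long key = (static_cast<long long>(i) << 32) | j;   // pack (i, j) into one key
        auto it = memo.find(key);
        if (it != memo.end()) return it->second;                 // cached subproblem
        long long best = -1;
        for (int k = i; k < j; ++k) {                            // try every split point
            long long c = chainCost(dims, i, k, memo) + chainCost(dims, k + 1, j, memo)
                        + 1LL * dims[i - 1] * dims[k] * dims[j];
            if (best < 0 || c < best) best = c;
        }
        memo[key] = best;
        return best;
    }

    int main() {
        std::vector<int> dims = {10, 30, 5, 60};                 // matrices 10x30, 30x5, 5x60
        std::unordered_map<long long, long long> memo;
        std::cout << chainCost(dims, 1, 3, memo) << "\n";        // prints 4500, as above
        return 0;
    }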

More information

Dan Hirschberg's examples of using dynamic programming for matrix multiplication and for the longest common substring and longest common subsequence problems, including pseudocode.




Entry modified 2 November 2007.

Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "dynamic programming", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 2 November 2007. (accessed TODAY) Available from: http://www.nist.gov/dads/HTML/dynamicprog.html
