How is “dynamic” programming different from “normal” programming?

Whenever I look at competitive-programming solutions, I keep seeing the term “dynamic programming”. I searched for the term and read several articles, but none of them gave a simple example of dynamic programming versus “normal” programming. So how does “dynamic” programming differ from “normal” programming? (Simple terms, please!)

+4
4 answers

Dynamic programming uses “programming” in the same sense that “linear programming” does: as a method for solving a problem, not as writing code.

One description I read recently (though I can’t remember the source anymore) suggested that the usual divide-and-conquer approach used in recursion is a top-down approach to problem solving, while dynamic programming is a bottom-up approach.

The Wikipedia article gives computing the Fibonacci sequence as an excellent use of dynamic programming: you memoize each value as you calculate it, so the rest of the algorithm can reuse it instead of recomputing the same results.
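As a concrete illustration (my own minimal sketch, not code from the answer above), here is the memoized Fibonacci computation it describes, using Python’s standard `functools.lru_cache`, next to the naive exponential-time version:

```python
from functools import lru_cache

def fib_naive(n):
    # Recomputes the same subproblems exponentially many times.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each distinct n is computed once; later calls hit the cache.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025 -- instant; fib_naive(50) would take hours
```

The only difference between the two functions is the cache, which is exactly the memoization step the answer refers to.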

Knuth’s algorithm for breaking paragraphs into lines is another good example of dynamic programming: if you consider inserting line breaks between every pair of words (and even breaking lines within words, at hyphenation points), it seems the only algorithms would be exponential or worse. However, by keeping track of the “badness” associated with previous line breaks, Knuth’s algorithm actually runs in time linear in the input size. (I must admit I don’t fully understand Knuth’s algorithm, only that it is very clever.)
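To make the idea concrete, here is a simplified word-wrap DP in the same spirit. This is a sketch, not Knuth’s actual algorithm: I’m assuming a squared-trailing-space “badness”, a free last line, and that every word fits within the line width.

```python
def wrap(words, width):
    """Minimize total 'badness' over all choices of line breaks.
    best[i] is the minimal badness achievable for laying out words[i:]."""
    n = len(words)
    best = [0] * (n + 1)    # best[n] = 0: nothing left to lay out
    split = [n] * (n + 1)   # split[i]: index just past the line starting at i
    for i in range(n - 1, -1, -1):
        best[i] = float("inf")
        line_len = -1
        for j in range(i, n):
            line_len += len(words[j]) + 1   # words i..j joined by single spaces
            if line_len > width:
                break
            # The last line incurs no penalty; otherwise square the leftover space.
            bad = 0 if j == n - 1 else (width - line_len) ** 2
            if bad + best[j + 1] < best[i]:
                best[i] = bad + best[j + 1]
                split[i] = j + 1
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:split[i]]))
        i = split[i]
    return lines

print(wrap("aaa bb cc ddddd".split(), 6))  # ['aaa', 'bb cc', 'ddddd']
```

Note how `best[j + 1]` is reused by every candidate break ending before position `j + 1` — that cached “badness of the rest” is what keeps the search from going exponential.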

+3

By "normal" programming, I think you mean programming in C ++ / Java / C #, right?

Dynamic programming is not "programming" in that sense. It is not about writing code; the word "programming" is used here in the context of solving complex problems by breaking them into simpler ones.

+3

Dynamic programming is really not "programming" but table lookup [and storage]. It sacrifices a little space in order to improve time complexity [quite a lot].
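A sketch of what that table lookup looks like in practice (longest common subsequence is my own example, not from the answer): an O(m·n) table of stored subproblem answers buys a polynomial-time solution to a problem with exponentially many candidate subsequences.

```python
def lcs_length(a, b):
    # table[i][j] = length of the longest common subsequence of a[:i] and b[:j]
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            if x == y:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```

Every cell is filled exactly once and then read by its neighbors — that is the space-for-time trade the answer describes.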

0

I know this is an old post, but I had the same question, so I’m answering it myself here.

Dynamic programming has two properties:

  • Optimal substructure: a globally optimal solution can be found by combining optimal solutions to local subproblems. For example, fib(x) = fib(x - 1) + fib(x - 2)
  • Overlapping subproblems: finding an optimal solution involves solving the same subproblem multiple times. For example, computing the Fibonacci sequence evaluates fib(x) many times

Given the definitions/properties above, it was unclear to me whether questions such as “Is an element in a set?” or “How do I find the sum of a set?” can be classified as dynamic programming. I can split the set into subsets (solve locally) and combine the results (get a global answer), and within the subsets I perform the summation many times.

I found a paragraph in a book that I think gives very useful hints for distinguishing Dynamic Programming (DP) from Divide and Conquer (D&C):

  • In D&C, the subproblems are substantially smaller than the original problem. In contrast, DP involves solving problems that are only slightly smaller than the original. For example, computing Fib(19) is not a substantially smaller problem than computing Fib(20), whereas summing ten elements is a substantially smaller problem than summing ten million.

  • The efficiency of a D&C algorithm does not depend on structuring the algorithm so that identical subproblems are solved repeatedly. In contrast, DP is efficient only when the number of distinct subproblems is significantly smaller than the total number of subproblems.
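The second point can be checked empirically. This sketch (my own illustration, not from the book) counts how many times the naive recursion visits each subproblem: for fib(20) there are only 21 distinct subproblems but tens of thousands of recursive calls, and that gap is exactly what DP exploits.

```python
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1  # record every recursive invocation of subproblem n
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
print(sum(calls.values()))  # 21891 total recursive calls...
print(len(calls))           # ...but only 21 distinct subproblems (n = 0..20)
```

Summing a list by splitting it in half, by contrast, produces subproblems that are all distinct, so caching them buys nothing — which is why that problem is D&C rather than DP.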

0

Source: https://habr.com/ru/post/1398911/
