Dynamic programming - rod cutting

I was looking at CLRS the other day to refresh my memory a bit and ran into the classic rod cutting problem.

The classic bottom-up solution for it is:

1: let r[0..n] be a new array
2: r[0] = 0
3: for j = 1 to n
4:     q = -1
5:     for i = 1 to j
6:         q = max(q, p[i] + r[j - i])
7:     r[j] = q
8: return r[n]
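For reference, here is a minimal runnable sketch of that procedure in Python (my own transcription; the function name and the 0-padded price list are my choices, with p[0] = 0 so the indices match the pseudocode):

# Bottom-up rod cutting: r[j] holds the best revenue for a rod of length j.
# p is the price list, padded so that p[i] is the price of a piece of length i.
def bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)              # r[0] = 0
    for j in range(1, n + 1):
        q = -1
        for i in range(1, j + 1):
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   # sample prices from CLRS
print(bottom_up_cut_rod(p, 4))                # 10, i.e. cut as 2 + 2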

Now I'm wondering about something. Why do we keep using p[i] on line 6? I mean, say we have j = 4; then it computes the following combinations:

1 + 3
2 + 2
3 + 1
4 + 0

What is the point of calculating the "3 + 1" split twice? I suggest using r[] instead of p[i] in the inner loop and stopping at floor(j / 2).

1: let r[0..n] be a new array
2: r[0] = 0
3: for j = 1 to n
4:     q = p[j]
5:     for i = 1 to floor(j / 2)
6:         q = max(q, r[i] + r[j - i])
7:     r[j] = q
8: return r[n]
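And the proposed variant as a runnable Python sketch (again my own transcription and naming); on the CLRS sample prices it returns the same revenues as the classic version for every n from 1 to 10:

# Uses only r[]: either sell the rod of length j uncut (p[j]) or split it into
# two parts i and j - i, each already solved optimally; by symmetry i only
# needs to go up to floor(j / 2).
def cut_rod_symmetric(p, n):
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        q = p[j]                          # no cut at all
        for i in range(1, j // 2 + 1):
            q = max(q, r[i] + r[j - i])
        r[j] = q
    return r[n]

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print([cut_rod_symmetric(p, n) for n in range(1, 11)])
# [1, 5, 8, 10, 13, 17, 18, 22, 25, 30] -- same as the classic version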

Does anyone see anything wrong with this approach?

Thanks,

3 answers

There is nothing wrong with your optimization. I have seen it used or mentioned several times before for the rod cutting problem (here is just one example: http://users.csc.calpoly.edu/~dekhtyar/349-Spring2010/lectures/lec09.349.pdf ). There is no such thing as a perfect textbook. Perhaps it was an oversight on CLRS's part, or they just felt that mentioning such optimizations would clutter the book.

Typically, such introductory books focus on high-level concepts and leave finding these minor optimizations to the reader. Consider bubble sort: not every book mentions the simple optimization that the inner loop index j only needs to go up to i.
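For comparison, here is a minimal bubble sort sketch in Python (one common variant, names mine) showing that optimization: the inner index j only runs up to i, because everything past i is already in its final place.

# Bubble sort with the shrinking inner loop: after each outer pass the largest
# remaining element has bubbled to position i, so the inner index j stops there.
def bubble_sort(a):
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):                # j only goes up to i, not len(a) - 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 2, 4, 6, 1, 3]))    # [1, 2, 3, 4, 5, 6]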


I know this is an old question, but I feel that it was only half answered. To address the first part of your question:

what is the point of calculating "3 + 1" twice

I wanted to point out that there is a footnote in the book that says:

If we required the pieces to be cut in order of nondecreasing size, there would be fewer ways to consider. For n = 4, we would consider only 5 such ways. [...] However, we will not pursue this line of investigation.

There is nothing wrong with your optimization. I think it was deliberately left out of the book to keep the later exercises accessible to students.
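To see the footnote's count concretely, here is a small sketch (mine, not from the book) that enumerates the cuttings of a rod of length n into pieces of nondecreasing size; for n = 4 it lists exactly the 5 ways the footnote refers to.

# Enumerate cuttings of a length-n rod into pieces of nondecreasing size,
# i.e. the integer partitions of n.
def nondecreasing_cuts(n, min_piece=1):
    if n == 0:
        yield []
        return
    for piece in range(min_piece, n + 1):
        for rest in nondecreasing_cuts(n - piece, piece):
            yield [piece] + rest

ways = list(nondecreasing_cuts(4))
print(ways)       # [[1, 1, 1, 1], [1, 1, 2], [1, 3], [2, 2], [4]]
print(len(ways))  # 5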


Your optimization is fine. There is a much more efficient way to do this with a single loop instead of two loops:

1: let r[0..n] be a new array
2: r[0] = 0
3: q = -1
4: for j = 1 to floor(n / 2)
5:     q = max(q, r[j] + r[n - j])
6:     r[j] = q
7: return r[floor(n / 2)]


Source: https://habr.com/ru/post/895931/

