Stone balancing

You have some stones with known weights w1, ..., wn. Write a program that splits the stones into two piles so that the difference between the piles' total weights is minimal. I have a DP algorithm:

    #include <iostream>
    using namespace std;

    int max(int a, int b) { return a >= b ? a : b; }

    // Classic 0/1 knapsack: the largest achievable pile weight <= capacity.
    int diff(int* weights, int number_elements, int capacity) {
        int** ptrarray = new int*[capacity + 1];
        for (int count = 0; count <= capacity; count++) {
            ptrarray[count] = new int[number_elements + 1];
        }
        for (int i = 0; i <= number_elements; ++i) {
            ptrarray[0][i] = 0;
        }
        for (int i = 0; i <= capacity; ++i) {
            ptrarray[i][0] = 0;
        }
        for (int i = 1; i <= number_elements; ++i) {
            for (int j = 1; j <= capacity; ++j) {
                if (weights[i - 1] <= j) {
                    ptrarray[j][i] = max(ptrarray[j][i - 1],
                                         ptrarray[j - weights[i - 1]][i - 1] + weights[i - 1]);
                } else {
                    ptrarray[j][i] = ptrarray[j][i - 1];
                }
            }
        }
        return ptrarray[capacity][number_elements];
    }

    int main() {
        int number_elements;
        cin >> number_elements;
        int* weights = new int[number_elements];
        int sum = 0;
        for (int i = 0; i < number_elements; ++i) {
            cin >> weights[i];
            sum += weights[i];
        }
        // The best we can do is fill one pile up to half the total weight.
        int first_half = sum / 2;
        int after = diff(weights, number_elements, first_half);
        cout << sum - 2 * after;
        return 0;
    }

But this is a bit naive. It requires too much memory, and I need some hints to simplify it. Is there a more efficient approach?

+5
1 answer

You can reduce memory usage by making the following observations:

  • At any moment, your code uses at most two adjacent layers of the ptrarray table (the current one and the previous one).

  • If you iterate from the highest index down to the lowest within each layer, you can overwrite the previous layer in place. This way you only need a single one-dimensional array.

Here is the pseudocode with this optimization:

    max_weight = new int[max_capacity + 1](0)
    for weight in weights:
        for capacity in [max_capacity ... weight]:
            max_weight[capacity] = max(max_weight[capacity], max_weight[capacity - weight] + weight)

This requires O(max_capacity) memory (instead of O(max_capacity * number_of_items)).
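
In C++, this single-array version of the question's diff function might look like the following (a sketch, keeping the question's conventions with capacity = sum / 2):

    #include <algorithm>
    #include <vector>

    // One-dimensional replacement for the 2D ptrarray table:
    // best_sum[c] is the largest pile weight achievable with the
    // stones processed so far without exceeding c.
    int diff(const std::vector<int>& weights, int capacity) {
        std::vector<int> best_sum(capacity + 1, 0);
        for (int w : weights) {
            // Descend so that best_sum[c - w] still holds the value
            // from the previous layer when we read it.
            for (int c = capacity; c >= w; --c) {
                best_sum[c] = std::max(best_sum[c], best_sum[c - w] + w);
            }
        }
        return best_sum[capacity];
    }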

A few additional optimizations. You can use a boolean array (indicating whether the sum i is reachable) and pick the largest reachable sum at the end, rather than storing, for each i, the largest sum less than or equal to i (a sketch of this variant follows below). In addition, you can use std::bitset instead of the boolean array to get a time complexity of O(max_capacity * num_items / word_len), where word_len is the width of the largest integer type on which the machine can perform bitwise operations. Adding one weight then looks like reachable |= (reachable << weight).
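
A minimal sketch of the boolean-array variant (std::vector<bool> is used so the size can be chosen at run time; the name diff_bool is illustrative, not from the original post):

    #include <vector>

    // Returns the minimal possible difference between the two piles.
    int diff_bool(const std::vector<int>& weights, int sum) {
        int half = sum / 2;
        std::vector<bool> reachable(half + 1, false);
        reachable[0] = true;  // the empty pile has weight 0
        for (int w : weights) {
            // Descend so each stone is added at most once.
            for (int c = half; c >= w; --c) {
                if (reachable[c - w]) reachable[c] = true;
            }
        }
        // Pick the largest reachable sum not exceeding sum / 2.
        int best = 0;
        for (int c = half; c >= 0; --c) {
            if (reachable[c]) { best = c; break; }
        }
        return sum - 2 * best;
    }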

So, the final version looks like this:

    reachable = bitset(max_capacity + 1)
    reachable[0] = true
    for weight in weights:
        reachable |= reachable << weight
    return highest set bit of reachable

The code becomes much simpler and more efficient in this way (the time complexity is technically the same, but in practice it is much faster).

There is a caveat here: you need to know the size of std::bitset at compile time, so if this is not possible, you will need a different bit set implementation.
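
For completeness, a sketch with std::bitset; MAX_SUM is an assumed compile-time bound on the total weight (the problem statement above gives no explicit limits, so this bound is an assumption):

    #include <bitset>
    #include <vector>

    const int MAX_SUM = 200000;  // assumed upper bound on the total weight

    int diff_bitset(const std::vector<int>& weights, int sum) {
        std::bitset<MAX_SUM + 1> reachable;
        reachable[0] = true;  // sum 0 is reachable with the empty pile
        for (int w : weights) {
            reachable |= reachable << w;  // every reachable sum can grow by w
        }
        // Highest reachable sum not exceeding sum / 2.
        int best = 0;
        for (int c = sum / 2; c >= 0; --c) {
            if (reachable[c]) { best = c; break; }
        }
        return sum - 2 * best;
    }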

+2

Source: https://habr.com/ru/post/1264977/

