The number of combinations of LEGO plastic bricks (C++)

You have several LEGO plastic bricks, all 1x1x1. You also have one tile, 1xN (N <= 80), on which you must lay the bricks. You arrange them in sequences (a sequence is valid if it contains 3 or more consecutive bricks), and there must be at least 1 empty cell between any two sequences. You must calculate the number of different ways the bricks can be placed on the tile.

Here is an example:

If the tile size is 1x7, there are 17 different combinations.

input: 7 output: 17

[picture of the 17 combinations for the 1x7 example]
(source: mendo.mk)

In addition, placing no bricks at all counts as 1 combination.
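(Sanity-checking the example by hand: for N = 7 there is the empty tile, 1 way; a single sequence of length 3, 4, 5, 6 or 7, placeable in 5 + 4 + 3 + 2 + 1 = 15 ways; and two sequences of 3 bricks separated by one empty cell, 1 way. In total, 1 + 15 + 1 = 17.)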

I worked on this problem and found a way to calculate the possible combinations when the tile length is at most 14 (at most 3 sequences). I did it using for loops.

My biggest problem is the sheer number of for loops I need. For 1 sequence I use 1 for loop; for 2 sequences, 2 loops plus the 1 loop from the 1-sequence case, and so on. If all 80 cells are used I can have up to 20 sequences, which works out to about 210 for loops in total, which is a huge amount. So it would be nice if I could collapse them into a single loop. I tried, but it got messy and did not give the correct answers.

New code:

    #include <iostream>
    using namespace std;

    int main() {
        long long int places, combinations = 1;
        cin >> places;
        long long int f[81], g[81];   // 81, not 80: index `places` can be 80
        f[0] = 0; f[1] = 0; f[2] = 0; // f(i): arrangements ending in a brick
        g[0] = 1; g[1] = 1; g[2] = 1; // g(i): arrangements ending in an empty cell
        for (int i = 3; i <= places; i++) {
            f[i] = f[i-1] + g[i-3];
            g[i] = f[i-1] + g[i-1];
        }
        combinations = f[places] + g[places];
        cout << combinations;
        return 0;
    }
+6
2 answers

If this is just a counting problem (we only need the number of arrangements, not the arrangements themselves), then it is easy. Suppose we have solved it for n >= 3; now, to solve it for n + 1, we proceed by induction:

Let f be the function that gives the number of valid arrangements in which the last cell holds a brick. Similarly, let g give the number of valid arrangements in which the last cell is empty. Define h = f + g, the number of all valid arrangements.

So we have:

 f(n+1) = f(n) + g(n-2)
 g(n+1) = g(n) + f(n)

With the initial condition:

 for n = 0, 1, 2:  g = 1, f = 0
 for n = 3:        g = 1, f = 1

Samples:

 n=4: g=2,  f=2 ==> h=4
 n=5: g=4,  f=3 ==> h=7
 n=6: g=7,  f=4 ==> h=11
 n=7: g=11, f=6 ==> h=17

We can solve this with a single for loop, in O(n).
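For illustration, here is a minimal sketch of that loop (my restatement of the recurrence above, equivalent to the code in the question but with O(1) memory, since each step looks back at most three positions):

    #include <iostream>
    using namespace std;

    int main() {
        int n;
        cin >> n;                        // tile length
        long long f = 0;                 // f(2) = 0 (also f(0) = f(1) = 0)
        long long g[3] = {1, 1, 1};      // initially g(0), g(1), g(2); shifted so g[2] is the newest
        for (int i = 3; i <= n; i++) {
            long long fi = f + g[0];     // f(i) = f(i-1) + g(i-3)
            long long gi = f + g[2];     // g(i) = f(i-1) + g(i-1)
            f = fi;
            g[0] = g[1]; g[1] = g[2]; g[2] = gi;
        }
        cout << f + g[2] << endl;        // h(n) = f(n) + g(n); prints 17 for n = 7
        return 0;
    }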


Why these recurrences hold:

 f(n+1) = f(n) + g(n-2)
 g(n+1) = g(n) + f(n)

First, let me prove the first part:

Remember that we defined f(n) as the number of valid solutions in which the last cell holds a brick, and g(n) as the number of valid solutions in which the last cell is empty.

A solution counted by f(n+1) can be obtained from one counted by f(n) by adding one brick in the last cell. It can also be obtained by adding three bricks after a solution counted by g(n-2), that is, bricks in cells n-1, n and n+1.

Note that we cannot add bricks after g(n-1) or g(n) to build a valid solution for f(n+1), because the resulting run would have fewer than 3 consecutive bricks. Also note that we must not count the arrangements obtained by adding bricks after g(n-3), because those are already counted by f(n). Thus we have f(n+1) = f(n) + g(n-2).

In the same way we can prove that g(n+1) = f(n) + g(n). This case is simpler: a solution counted by g(n+1) is just any valid solution of length n followed by an empty cell. There is no 3-in-a-row requirement for empty cells, so an empty cell can follow any valid solution.

+10

As a person with a mathematical background rather than a CS one, I feel obliged to mention that while Said Amiri's algorithm is very good and will probably work fast enough for N up to several million (with constant memory, of course), it is not the best algorithm in terms of time.

I'll pick up where he left off:

 f(n+1) = f(n) + g(n-2)
 g(n+1) = f(n) + g(n)

Since f and g are discrete functions, you can regard them as sequences. The above then becomes a linear system of recurrence relations. Fortunately, such a system can be solved completely, so we can obtain the explicit forms of both f and g.
Unfortunately, SO doesn't seem to support MathJax like math.SE, so I apologize for the poor quality of the equations from here.
Let

            | f(n)   |
            | f(n-1) |
    u(n) =  | f(n-2) |
            | g(n)   |
            | g(n-1) |
            | g(n-2) |
That is, u(n) is a column vector. Then the following holds:

    | f(n+1) |   | 1 0 0 0 0 1 |   | f(n)   |
    | f(n)   |   | 1 0 0 0 0 0 |   | f(n-1) |
    | f(n-1) | = | 0 1 0 0 0 0 | . | f(n-2) |
    | g(n+1) |   | 1 0 0 1 0 0 |   | g(n)   |
    | g(n)   |   | 0 0 0 1 0 0 |   | g(n-1) |
    | g(n-1) |   | 0 0 0 0 1 0 |   | g(n-2) |
It follows that u(n) = A * u(n-1), where A is the matrix above.
Then u(n) = (A^(n-2)) * u(2), where u(2) is the vector containing the initial values for the problem. This in turn gives an algorithm of O(log(n)) complexity, since you can use fast exponentiation to compute (A^(n-2)) and then multiply it by u(2).

Of course, any such method is likely to require BigInt, since otherwise overflow is almost guaranteed.
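For illustration, here is a minimal sketch of this approach (my own code, not from the answer). Note that for this problem's actual bound of N <= 80 the answer is only on the order of 10^16, so plain long long happens to be enough and no BigInt is needed:

    #include <array>
    #include <iostream>
    using namespace std;

    typedef array<array<long long, 6>, 6> Mat;

    // product of two 6x6 matrices
    Mat mul(const Mat& a, const Mat& b) {
        Mat c{};  // zero-initialized
        for (int i = 0; i < 6; i++)
            for (int k = 0; k < 6; k++)
                for (int j = 0; j < 6; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // fast exponentiation: a^p in O(log p) multiplications
    Mat power(Mat a, long long p) {
        Mat r{};
        for (int i = 0; i < 6; i++) r[i][i] = 1;  // identity
        while (p > 0) {
            if (p & 1) r = mul(r, a);
            p >>= 1;
            if (p) a = mul(a, a);  // skip the last squaring to avoid needless overflow
        }
        return r;
    }

    int main() {
        long long n;
        cin >> n;  // tile length
        if (n <= 2) { cout << 1 << endl; return 0; }  // only the empty arrangement fits
        Mat A = {{
            {1, 0, 0, 0, 0, 1},  // f(n+1) = f(n) + g(n-2)
            {1, 0, 0, 0, 0, 0},
            {0, 1, 0, 0, 0, 0},
            {1, 0, 0, 1, 0, 0},  // g(n+1) = f(n) + g(n)
            {0, 0, 0, 1, 0, 0},
            {0, 0, 0, 0, 1, 0},
        }};
        // u(2) = (f(2), f(1), f(0), g(2), g(1), g(0)) = (0, 0, 0, 1, 1, 1)
        long long u2[6] = {0, 0, 0, 1, 1, 1};
        Mat P = power(A, n - 2);   // A^(n-2)
        long long f = 0, g = 0;
        for (int j = 0; j < 6; j++) {
            f += P[0][j] * u2[j];  // row 0 of A^(n-2) gives f(n)
            g += P[3][j] * u2[j];  // row 3 gives g(n)
        }
        cout << f + g << endl;     // h(n) = f(n) + g(n); prints 17 for n = 7
        return 0;
    }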

Also note that this method can be taken one step further:
You can find the eigenvalues and eigenvectors of A, and then decompose u(2) over the eigenvectors. That gives a closed form for both f(n) and g(n).
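To sketch what those eigenvalues look like (a quick derivation of my own, not part of the answer above): the second recurrence gives f(n) = g(n+1) - g(n), and substituting that into the first yields a single recurrence for g,

    g(n+2) = 2g(n+1) - g(n) + g(n-2)

whose characteristic polynomial factors as

    x^4 - 2x^3 + x^2 - 1 = (x^2 - x - 1)(x^2 - x + 1)

The dominant root is the golden ratio (1 + sqrt(5))/2, and the remaining roots have modulus at most 1, so f and g grow like powers of the golden ratio, at the same rate as the Fibonacci numbers. As the remark below anticipates, the eigenvalues are not all integers.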

However, I do not recommend actually computing with the closed form: it will almost certainly involve high-precision floating-point calculations (unless all eigenvalues are integers, which is unlikely), which are at least as complex to program and are not usually constant-time operations; neither are BigInt operations, of course. So a constant-time algorithm is usually out of reach, and you probably don't even need O(log(n)), since the linear solution is good enough for most applications.

Note: the technique described here can be applied in many problems and is extremely useful in dynamic programming problems. Plus, people are usually very impressed when they see it for the first time ;)

+5

Source: https://habr.com/ru/post/914612/

