In the algorithm presented in Bringmann's paper [Bringmann16], it is proposed to use Boolean convolution to obtain the sumset of two sets of natural numbers.
In that work, both sets are represented as bitmasks (indicator vectors). If you view them as polynomials of the form 1 + bit(0) * x + bit(1) * x^2 + ... + bit(i) * x^(i + 1) + ... + bit(n - 1) * x^n, then their product contains only those monomials whose degree is an element of the first set, of the second set, or of their sumset. The coefficients of the product are not relevant to the subset-sum problem: their values merely count in how many ways a number (the degree of the corresponding monomial) can be obtained as the sum of an element from the first set and an element from the second set (one of which may be 0, via the constant term). The value of any coefficient is bounded by the size of the larger of the two sets (s).
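To make the idea concrete, here is a minimal sketch (my own illustration, not code from the paper): the sets become 0/1 coefficient vectors, and a schoolbook polynomial product yields coefficients that count the pairs summing to each degree; the sumset is just the set of degrees with a nonzero coefficient.

```python
# Sketch: sumset of A and B via convolution of their indicator vectors.
# The helper names (indicator, convolve) are mine for illustration.

def indicator(s, n):
    """Indicator vector of set s as 0/1 coefficients for degrees 0..n."""
    v = [0] * (n + 1)
    for x in s:
        v[x] = 1
    return v

def convolve(p, q):
    """Schoolbook polynomial product: c[k] = sum over i+j==k of p[i]*q[j]."""
    c = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            c[i + j] += pi * qj
    return c

A, B = {0, 1, 3}, {0, 2}          # 0 in each set plays the role of the constant term
c = convolve(indicator(A, max(A)), indicator(B, max(B)))
sumset = {k for k, ck in enumerate(c) if ck > 0}
print(sumset)  # {0, 1, 2, 3, 5} == {a + b for a in A for b in B}
```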
To reduce the polynomial multiplication to multiplication of large integers (built from the indicator vectors), it is necessary to insert log(s) zero bits after each bit of the indicator vector. If the bits (log(s) + 1) * i ... (log(s) + 1) * (i + 1) - 1 of the product are not all zero, then the corresponding sum i is achievable.
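The padding step can be sketched as follows (my own illustration of this Kronecker-substitution-style trick, relying on Python's arbitrary-precision integers): each indicator bit gets a block of log(s) + 1 bits so that coefficient carries never cross a block boundary, and a nonzero block in the product marks an achievable sum.

```python
# Sketch of the padding trick: one (log2(s) + 1)-bit block per coefficient,
# one big-integer multiply, then a scan of the blocks of the product.

def encode(s, block):
    """Pack the indicator vector of set s into a big integer, one bit per block."""
    n = 0
    for x in s:
        n |= 1 << (x * block)
    return n

A, B_set = {0, 1, 3}, {0, 2}
s = max(len(A), len(B_set))       # coefficient bound from the text
block = s.bit_length() + 1        # block wide enough to hold any coefficient
prod = encode(A, block) * encode(B_set, block)

sumset = set()
k = 0
while prod:
    if prod & ((1 << block) - 1): # block k nonzero => sum k is achievable
        sumset.add(k)
    prod >>= block
    k += 1
print(sumset)  # {0, 1, 2, 3, 5}
```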
For big-integer multiplication algorithms (Karatsuba-like or FFT-based), this padding adds a logarithmic factor to the problem size. I want to eliminate this logarithmic overhead.
I think this is possible if I use the simple schoolbook (i, j, k) algorithm to multiply the two integers, using bitwise OR instead of ADD for accumulation. But that algorithm has quadratic time complexity.
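The OR-accumulation idea, at the coefficient level, looks like this (a minimal sketch of the quadratic approach described above, with names of my choosing): no padding is needed, because OR never produces carries.

```python
# Sketch of quadratic OR-convolution: the schoolbook double loop over (i, j),
# but accumulating with bitwise OR instead of ADD, so each output position
# only records reachability, never a count, and no carries can occur.

def or_convolve(a, b):
    """a, b: 0/1 indicator lists; returns the indicator list of the sumset."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] |= ai & bj   # 1 iff some pair (i, j) reaches the sum i + j
    return c

a = [1, 1, 0, 1]   # indicator of {0, 1, 3}
b = [1, 0, 1]      # indicator of {0, 2}
print(or_convolve(a, b))  # [1, 1, 1, 1, 0, 1] -> sumset {0, 1, 2, 3, 5}
```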
Is there an OR-based analogue of FFT or Schönhage-Strassen multiplication?