Reducing O(n^3) C++ code complexity

I would like to reduce the complexity of the following algorithm. It takes a word as input and calculates the number of unique letters in it (the "entropy" of the word). My current solution uses 3 nested loops, which gives O(n^3) complexity. Since this code is part of a larger project (we built a solver for the game Boggle), I was hoping to reduce the complexity of my algorithm to cut its execution time. Thanks in advance!

    int wordEntropy(string word) {
        int length = word.length();
        int uniquewords = length;
        string compare = word;
        char save[17];
        int cond = 0;
        for (int ii = 0; ii < length; ii++) {
            for (int jj = ii + 1; jj < length; jj++) {
                for (int kk = 0; kk <= ii; kk++) {
                    if (save[kk] == word[ii]) { cond++; }
                }
                if (word[ii] == word[jj]) {
                    if (cond > 0) { break; }
                    uniquewords--;
                }
            }
            save[ii] = word[ii];
            cond = 0;
        }
        return uniquewords;
    }
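For example, the function should behave like this (a minimal driver, added here purely for illustration):

    #include <iostream>
    #include <string>
    using namespace std;  // the code above uses unqualified string

    // wordEntropy as defined above

    int main() {
        cout << wordEntropy("boggle") << "\n";  // 5 unique letters: b, o, g, l, e
        cout << wordEntropy("hello") << "\n";   // 4 unique letters: h, e, l, o
    }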
+5
4 answers

If this is really about performance then, depending on the range of allowed characters, something like this might be faster:

    #include <algorithm>
    #include <cstddef>
    #include <string>

    std::size_t wordEntropy( const std::string & word )
    {
        unsigned char seen[256] = { 0 };
        for( unsigned char c : word )
        {
            ++seen[ c ];
        }
        return std::count_if( &seen[0], &seen[0] + 256,
                              []( unsigned char c ) { return c != 0; } );
    }

But obviously it is a little harder to maintain. This solution has guaranteed O(n) complexity and makes no dynamic memory allocations.

An alternative version that has no problem if a character occurs more than 255 times:

    #include <algorithm>
    #include <cstddef>
    #include <string>

    std::size_t wordEntropy( const std::string & word )
    {
        bool seen[256] = { false };
        for( unsigned char c : word )
        {
            seen[ c ] = true;
        }
        return std::count_if( &seen[0], &seen[0] + 256,
                              []( bool t ) { return t; } );
    }
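The same flags can also be packed into a std::bitset, which counts the set bits in a single call — a variant sketch along the same lines:

    #include <bitset>
    #include <cstddef>
    #include <string>

    // One bit per possible (unsigned) char value; count() returns the
    // number of bits set, i.e. the number of distinct characters.
    std::size_t wordEntropy( const std::string & word )
    {
        std::bitset<256> seen;
        for( unsigned char c : word )
        {
            seen.set( c );
        }
        return seen.count();
    }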
+9

One cheap solution is to simply insert the characters into an unordered_set, which is a hash set (amortized O(1) insertion and lookup):

    #include <string>
    #include <unordered_set>

    int wordEntropy(const std::string &word) {
        std::unordered_set<char> uniquechars(word.begin(), word.end());
        return uniquechars.size();
    }

This gives O(n) complexity on average, which is as good as it gets.
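If the construction overhead matters, a possible refinement is to reserve buckets up front so that no rehashing happens during insertion (a sketch; wordEntropyReserved is just an illustrative name):

    #include <string>
    #include <unordered_set>

    int wordEntropyReserved(const std::string &word) {
        std::unordered_set<char> uniquechars;
        // word.size() is an upper bound on the number of distinct
        // characters, so no rehash can occur while inserting.
        uniquechars.reserve(word.size());
        uniquechars.insert(word.begin(), word.end());
        return static_cast<int>(uniquechars.size());
    }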

+13

Do the calculation in place, without any additional (and slow) memory allocations:

    int wordEntropy(std::string word) {  // by value: the copy is reordered in place
        std::sort(word.begin(), word.end());
        auto last = std::unique(word.begin(), word.end());
        return last - word.begin();
    }

(This needs <algorithm> and <string>. The sort makes it O(n log n) rather than O(n), but for Boggle-length words that is negligible.)
+10

If the strings are short, you should worry more about memory allocations than about big-O. In any case, here is a faster solution.

Since you mentioned that this is a Boggle solver, and that the input to this function is a string named "word", I assume that you have already verified that every character in "word" is an ASCII letter. If so, this is probably the fastest entropy function:

    #include <cstdint>
    #include <string>

    int word_entropy ( std::string const & word )
    {
        std::uint32_t bit_map = 0;
        for ( char const ch : word )
            bit_map |= static_cast<std::uint32_t>( 1 ) << ( ch & 31 );  // 'a'..'z' / 'A'..'Z' map to bits 1..26
        return __builtin_popcount ( bit_map );  // GCC/Clang intrinsic: number of set bits
    }
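Note that __builtin_popcount is a GCC/Clang intrinsic. On a C++20 compiler the same thing can be written portably with std::popcount from <bit> — a sketch:

    #include <bit>
    #include <cstdint>
    #include <string>

    int word_entropy ( std::string const & word )
    {
        std::uint32_t bit_map = 0;
        for ( char const ch : word )
            bit_map |= static_cast<std::uint32_t>( 1 ) << ( ch & 31 );
        return std::popcount( bit_map );  // C++20, portable across compilers
    }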
0

Source: https://habr.com/ru/post/1259046/

