I saw the second variant in someone else's code, and I believe the length comparison was added to improve performance. It was used in a parser for a scripting language with a particular dictionary: words from 4 to 24 characters long, with an average of 7-8 letters; the alphabet consists of the 26 Latin letters plus "@", "$" and "_".
The length comparison was placed before the == operator, which works on STL strings and obviously takes longer than a simple integer comparison. At the same time, the distribution of first letters in this dictionary is wider than the distribution of word lengths, so the first couple of letters of two strings will differ more often than their lengths do. That would make the length comparison unnecessary.
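For concreteness, here is a minimal sketch of the two variants under discussion, assuming std::string (the function names are mine, for illustration):

```cpp
#include <string>

// Variant 1: plain comparison.
bool equal_plain(const std::string& a, const std::string& b) {
    return a == b;
}

// Variant 2: the one from the parser, with an explicit length check that
// short-circuits before the full comparison. Note that typical standard-library
// implementations of operator== already compare sizes first, which would make
// the explicit check redundant.
bool equal_checked(const std::string& a, const std::string& b) {
    return a.size() == b.size() && a == b;
}
```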
I ran several tests, and here is what I found: when comparing two random strings millions of times, the second method is noticeably faster, so the length comparison seems useful. But in the real project it is actually slower in debug mode, while in release mode it is faster.
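Roughly the kind of microbenchmark I mean, as a sketch; the dictionary contents, the needle and the iteration count here are made up, and you would swap the condition between the two variants to compare them:

```cpp
#include <chrono>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Hypothetical stand-in for the parser dictionary: identifier-like words
    // of 4..24 characters over [A-Za-z@$_] (all entries here are invented).
    std::vector<std::string> words = {"@rate", "counter", "$tmp_buf", "dispatch_table"};
    const std::string needle = "counter";

    std::size_t hits = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000000; ++i)
        for (const std::string& w : words)
            // Variant 2; drop the size check to measure variant 1 instead.
            if (w.size() == needle.size() && w == needle)
                ++hits;
    auto t1 = std::chrono::steady_clock::now();

    long long us =
        std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    // Printing hits keeps the compiler from optimizing the loop away.
    std::printf("%lld us, hits = %zu\n", us, hits);
}
```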
So my question is: why can a length comparison speed up string comparison, and why can it slow it down?
UPD: I don't like the second method either, but it was presumably done for a reason, and I wonder what that reason is.
UPD2: Seriously, the question is not about the best way to compare strings. In my case I don't even use STL strings. I am not surprised to hear that the length comparison is unnecessary, incorrect, and so on. The strange thing is that it really does tend to work slightly better in one specific test. How is this possible?