Binary search vs. ternary search

In binary search, we split the array in half and then recursively search the relevant half with binary search again.

Now, suppose that instead of binary search I use a "ternary" search that splits the array into three parts (a rough sketch of what I mean follows the questions). My questions:

  • Will ternary search be faster than binary search, or vice versa?
  • Under what conditions does each algorithm work better?
  • Does performance depend on the array size?
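For concreteness, here is a minimal sketch of the two variants being compared, on a sorted array (my own illustration; the function names and the pivot choices in ternary_search are just one reasonable way to split into thirds):

```python
def binary_search(a, target):
    """Classic binary search on a sorted list; returns an index or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def ternary_search(a, target):
    """Same idea, but each step picks two pivots and keeps one third."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third
        if a[m1] == target:
            return m1
        if a[m2] == target:
            return m2
        if target < a[m1]:
            hi = m1 - 1          # keep the first third
        elif target > a[m2]:
            lo = m2 + 1          # keep the last third
        else:
            lo, hi = m1 + 1, m2 - 1   # keep the middle third
    return -1

a = [2, 3, 5, 7, 11, 13, 17, 19, 23]
print(binary_search(a, 13), ternary_search(a, 13))   # both return index 5
```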
4 answers

From Wikipedia: a ternary search tree is a ternary tree whose nodes are ordered with respect to their search keys. If the search keys are strings, each node stores a single character, and searching for a string consists of a series of binary searches, one per character of the string.

In binary search, you compare once and keep one half or the other.
In ternary search, you compare against two pivots: if the target is smaller than the first pivot, you keep the first third; otherwise you compare against the second pivot and keep either the middle third or the last third.
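A minimal sketch of the ternary search tree described above (my own illustration, not code from the Wikipedia article): the left/right links are the binary-search part of each character comparison, and the middle link advances to the next character of the key.

```python
class TSTNode:
    def __init__(self, ch):
        self.ch = ch
        self.left = None     # keys whose current character is smaller
        self.mid = None      # next character of keys that match ch here
        self.right = None    # keys whose current character is larger
        self.is_end = False  # True if a key ends at this node

def tst_insert(node, key, i=0):
    ch = key[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.left = tst_insert(node.left, key, i)
    elif ch > node.ch:
        node.right = tst_insert(node.right, key, i)
    elif i < len(key) - 1:
        node.mid = tst_insert(node.mid, key, i + 1)
    else:
        node.is_end = True
    return node

def tst_contains(node, key, i=0):
    if node is None:
        return False
    ch = key[i]
    if ch < node.ch:
        return tst_contains(node.left, key, i)
    if ch > node.ch:
        return tst_contains(node.right, key, i)
    if i == len(key) - 1:
        return node.is_end
    return tst_contains(node.mid, key, i + 1)

root = None
for word in ["cat", "cap", "car", "dog"]:
    root = tst_insert(root, word)
print(tst_contains(root, "car"))   # True
print(tst_contains(root, "cow"))   # False
```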



Well, it certainly depends on what you mean by "faster". Asymptotically, both run in O(log n) time (technically, binary search runs in O(log_2 n) while ternary search runs in O(log_3 n), where log_k means the logarithm base k; the two differ only by a constant factor, so both are O(log n)). So from an algorithmic point of view, the two searches have the same overall running time (i.e., the same time complexity).
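As a rough back-of-the-envelope check of that constant factor (my own numbers, not from the answer): ternary search uses fewer levels, but each level costs up to two pivot comparisons instead of one, so the total comparison count is typically higher.

```python
import math

n = 1_000_000
binary_levels  = math.log2(n)      # ~19.93 halving steps
ternary_levels = math.log(n, 3)    # ~12.58 "thirding" steps

# Binary search makes about one comparison per level, ternary search up to two:
print(round(binary_levels, 2))         # 19.93
print(round(2 * ternary_levels, 2))    # 25.15
```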

However, there are of course particular inputs on which one does less work than the other. For example, if the target value is exactly in the middle of the array, binary search returns it on the first comparison and never recurses, while ternary search would have to recurse to find it. Similarly, if the target value is exactly one third of the way through the array, ternary search finds it immediately, while binary search has to recurse.


If you are talking about a single-processor system, there is no asymptotic difference, and ternary search has more per-step overhead. On multiprocessor/multicore systems it can (up to a point) speed up the search to split it into independent sub-searches (provided the runtime/language allows that kind of parallelism), but this is usually best done only once, at the top level.
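A minimal sketch of the kind of one-time, top-level split being described, assuming a sorted Python list and only the standard library (the helper names are mine; note that in CPython the GIL means plain threads will not actually speed up pure-Python comparisons, so this only illustrates the structure of the split):

```python
from bisect import bisect_left
from concurrent.futures import ThreadPoolExecutor

def search_chunk(data, lo, hi, target):
    """Binary-search the slice data[lo:hi]; return a global index or None."""
    i = bisect_left(data, target, lo, hi)
    return i if i < hi and data[i] == target else None

def parallel_search(data, target, workers=4):
    # One top-level split into `workers` independent binary searches.
    bounds = [(len(data) * k // workers, len(data) * (k + 1) // workers)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(lambda b: search_chunk(data, b[0], b[1], target), bounds)
    return next((h for h in hits if h is not None), None)

data = list(range(0, 1000, 3))       # a sorted array: 0, 3, 6, ..., 999
print(parallel_search(data, 300))    # 100 (index of the value 300)
print(parallel_search(data, 301))    # None (301 is not in the array)
```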


I have trouble with what you mean here by a three-way search. The reason binary search splits the array into two halves is that each comparison you make divides the array into two regions: elements smaller than the probed element and elements larger than it. I see no easy way to generalize this so that a single comparison splits the array into three parts.

However, if instead of splitting the array into equal halves you split it 1/3 : 2/3 on each iteration, you still get O(log n) performance, although with a larger constant factor. In fact, for any constant ε with 0 < ε < 1, if you split the array into pieces of size εn and (1 - ε)n, you get O(lg n) behavior.

If on each comparison you split the array into two pieces of size εn and (1 - ε)n, stopping only when the array size drops below one element, then the algorithm terminates after k steps, where k is the smallest integer for which ε^k n < 1 and (1 - ε)^k n < 1. Rearranging the first condition, we get

ε^k n < 1

ε^k < 1 / n

k > log_ε (1 / n)

k > -log_ε n

k > -lg n / lg ε

k > lg n / lg (1 / ε)

Using similar logic, but with 1 - ε, we get

k > lg n / lg (1 / (1 - ε))

Note that since lg (1 / ε) and lg (1 / (1 - ε)) are constants, the smallest k satisfying these properties is O(log n).
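A quick numeric check of this bound (my own sketch; eps plays the role of ε above): repeatedly shrinking a piece of size n by a factor ε until it drops below one element takes the smallest integer k greater than lg n / lg(1/ε) steps.

```python
import math

def steps_to_shrink(n, eps):
    """Count how many times n must be multiplied by eps before it drops below 1."""
    k, size = 0, n
    while size >= 1:
        size *= eps
        k += 1
    return k

n = 1_000_000
for eps in (0.5, 2 / 3, 0.9):
    bound = math.log2(n) / math.log2(1 / eps)   # k > lg n / lg(1/eps)
    # smallest integer strictly greater than the bound:
    print(eps, steps_to_shrink(n, eps), math.floor(bound) + 1)
```

For n = 1,000,000 the simulated step counts (20, 35, and 132) match the bound for ε = 1/2, 2/3, and 0.9 respectively.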

