I don't believe algorithms where you can freely choose between O(1) with a large constant and O(log N) really exist. If there are N elements to work with at the start, it is simply impossible to make it O(1); the only thing you can do is move the N to some other part of your code.
What I'm trying to say is that in all the real cases I know of, you have either a space/time trade-off or some kind of preprocessing, such as compiling the data into a more efficient form.
That is, you don't really go O(1), you just move the N part somewhere else. Either you trade the performance of some part of your code for a certain amount of memory, or you trade the performance of one part of your algorithm for another. To stay sane, you should always look at the bigger picture.
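To make that concrete, here is a minimal sketch (Python, purely my own illustration, not code from anywhere in particular): a membership test done by scanning versus the same test after building an index. The N never disappears; it moves from every query into a one-time preprocessing step plus extra memory.

```python
def contains_scan(items, x):
    # No preprocessing, no extra memory: every query pays O(N).
    return any(item == x for item in items)

def build_index(items):
    # Preprocessing: O(N) time and O(N) extra memory, paid once.
    return set(items)

def contains_indexed(index, x):
    # After preprocessing, each query is O(1) on average.
    return x in index

items = list(range(1_000_000))
index = build_index(items)                # this is where the N went
print(contains_scan(items, 123_456))      # True, after an O(N) scan
print(contains_indexed(index, 123_456))   # True, after an O(1) lookup
```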
My point is that if you have N items, they cannot disappear. In other words, you can choose between an inefficient O(N^2) algorithm (or worse) and O(N log N): that is a real choice. But you never really go O(1).
What I'm trying to point out is that for every problem and initial state of the data there is a "best" algorithm. You can do worse, but never better. With some experience you can get a good idea of what this intrinsic complexity is. Then, if your overall treatment matches that complexity, you know you have something. You won't be able to reduce that complexity, only move it around.
If the problem is O(N), it will not become O(log N) or O(1); you just add some preprocessing so that the overall complexity is unchanged or worse, and perhaps a later step is improved. Say you want the smallest element of an array: you can search for it in O(N), or sort the array with any common O(N log N) sort and then get the first element in O(1).
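As a sketch of that example (again just an illustration, not anyone's official code):

```python
import random

data = [random.random() for _ in range(100_000)]

# Option 1: answer directly in O(N).
smallest = min(data)

# Option 2: preprocess by sorting, O(N log N), then the answer itself is O(1).
prepared = sorted(data)
smallest_again = prepared[0]

# Same answer, but the total cost did not drop below O(N); option 2 is
# strictly more work unless the sorted array is reused later.
assert smallest == smallest_again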
Is it worth doing that? Only if the problem also asks for the second, third, etc. elements. In that case your initial problem was really O(N log N), not O(N).
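Here is a hedged sketch of what I mean, with made-up helper names: repeated order-based queries are what make the up-front sort pay for itself.

```python
import random

def kth_smallest_by_scan(data, k):
    # No preprocessing: this naive selection costs O(k * N) per query.
    remaining = list(data)
    for _ in range(k):
        smallest = min(remaining)
        remaining.remove(smallest)
    return smallest

def make_kth_smallest(data):
    prepared = sorted(data)              # O(N log N), paid once
    return lambda k: prepared[k - 1]     # every later query is O(1)

data = [random.random() for _ in range(10_000)]
kth = make_kth_smallest(data)
assert kth(3) == kth_smallest_by_scan(data, 3)
```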
And anyway, for the sizes of N you meet in practice, O(1) = O(log N): the logarithm of any realistic N is a small, bounded number.
So ;-), when you really do get to choose between an O(1) with a large constant and an O(log N), or an O(log N) and an O(1), what decides it in practice is the constants involved and how the code actually behaves (memory accesses, etc.).
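If you want to see that on your own machine, here is a rough (and deliberately unscientific) timing sketch, assuming Python's dict as the "O(1) with a constant" and bisect as the O(log N) competitor; the exact numbers will depend entirely on your hardware:

```python
import bisect
import random
import timeit

for n in (10**3, 10**5, 10**6):
    keys = list(range(n))
    table = {k: k for k in keys}     # O(N) preprocessing, O(N) extra memory
    sorted_keys = keys               # already sorted; in general this is O(N log N)
    probes = [random.randrange(n) for _ in range(1000)]

    t_hash = timeit.timeit(lambda: [table[p] for p in probes], number=100)
    t_bsect = timeit.timeit(
        lambda: [sorted_keys[bisect.bisect_left(sorted_keys, p)] for p in probes],
        number=100,
    )
    print(f"N={n:>8}  dict: {t_hash:.3f}s  bisect: {t_bsect:.3f}s")
```

For N in this range, log2(N) only grows from about 10 to about 20, so whatever difference you measure is mostly constant factors, not the log N term.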