In the worst case, your algorithm is O((log n)^2).
Suppose you start at 0 (with an interval of 1), and the value you are looking for is at position 2^n - 1.
First you check 1, 2, 4, 8, ..., 2^(n-1), 2^n. Oops, that's an overshoot, so you go back to 2^(n-1).
Next, you check 2^(n-1) + 1, 2^(n-1) + 2, ..., 2^(n-1) + 2^(n-2), 2^(n-1) + 2^(n-1). That last term is 2^n, which again overshoots. Back to 2^(n-1) + 2^(n-2).
And so on, until you finally reach 2^(n-1) + 2^(n-2) + ... + 1 == 2^n - 1.
The first overshoot took log n steps. The next took (log n) - 1 steps. The one after that took (log n) - 2 steps. And so on.
So, in the worst case, you took 1 + 2 + 3 + ... + log n == O((log n)^2) steps.
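To make the worst case concrete, here is a minimal Python sketch of the scheme described above (doubling the step from the current position and restarting the doubling after every overshoot). The function name and the probe counter are mine, added just to make the cost visible; it assumes a sorted list.

```python
def restart_gallop(a, target):
    """Search sorted list `a` for `target` by doubling the step from the
    current position, restarting the doubling after each overshoot.
    Returns (index, probes), with index == -1 if target is absent.
    A hypothetical sketch of the algorithm analyzed above."""
    probes = 1
    if a and a[0] == target:
        return 0, probes
    pos = 0  # last index known not to overshoot
    while True:
        step = 1
        while True:
            nxt = pos + step
            probes += 1
            if nxt >= len(a) or a[nxt] > target:
                break  # overshot: fall back and restart the doubling
            if a[nxt] == target:
                return nxt, probes
            step *= 2
        if step == 1:
            return -1, probes  # even a step of 1 overshoots: not present
        pos += step // 2  # back up to the last position that was still safe
```

For a = range(16) and target 15 (position 2^4 - 1), this makes 14 probes: the rounds shrink by roughly one probe each, which is exactly the 1 + 2 + ... + log n pattern.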
The best idea, I think, is to switch to a traditional binary search as soon as you overshoot for the first time. That preserves the algorithm's O(log n) worst-case performance, while still being slightly faster when the target really is nearby.
I do not know the name of this algorithm, but I like it. (By a strange coincidence, I could have used it just yesterday. Really.)