If I understand the question, you are asking for algorithms that are theoretically better but practically worse in almost all situations. Therefore, one would not expect them to actually be used except by mistake.
One possible example is universal memoization. In theory, every deterministic function call should be memoized for every possible input. That way, complex calculations could be replaced by simple table lookups. For a wide range of problems, this technique profitably trades computing time for storage space. But suppose there were a central repository of the results for all possible inputs of all possible functions used by all of humanity's computers. The first time anyone anywhere did a calculation would be the last time. All subsequent attempts would result in a table lookup.
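For a sense of what ordinary, local memoization looks like, here is a minimal Python sketch (my own illustration; `slow_compute` is just a hypothetical stand-in for an expensive deterministic function):

```python
from functools import lru_cache

# Memoization: cache the results of a deterministic function so that
# repeated calls become table lookups instead of recomputation.
@lru_cache(maxsize=None)
def slow_compute(n):
    # Stand-in for some expensive deterministic calculation.
    total = 0
    for i in range(n):
        total += i * i
    return total

slow_compute(10_000)  # computed once
slow_compute(10_000)  # every later call is just a cache lookup
```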
But there are several reasons I can think of that this is not done:
The memory required to store all the results would likely be impossibly large. It seems likely that the number of bits needed would exceed the number of particles in the universe. (But even the task of estimating that number is daunting.)
It would be difficult to construct an efficient algorithm for memoizing that huge a problem space.
The cost of communicating with the central repository would likely exceed the benefit as the number of clients increases.
I am sure you can think of other issues.
In fact, this sort of time/space trade-off is incredibly common in practice. Ideally, all data would live in L1 cache, but because of size limitations you always need to put some data on disk or (horrors!) tape. Advancing technology reduces some of the pain of these trade-offs, but as I suggested above there are limits.
In response to a comment by J. F. Sebastian:
Suppose that instead of a universal memoization repository, we consider a factorial repository. And it won't hold the results for all possible inputs. Rather, it will be limited to results from 1 to N!. Now it's easy to see that any computer that did factorials would benefit from looking up the result rather than computing it. Even for calculating (N+1)! the lookup would be a huge win, since that calculation reduces to N!(N+1).
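As a rough sketch of how such a factorial repository could be used (Python; `FACTORIAL_REPO` and `N` are made-up names for illustration, and here the "repository" is just a local list rather than a shared service):

```python
import math

N = 1000  # size of the hypothetical shared repository

# Precomputed table of 1! .. N!.
FACTORIAL_REPO = [1] * (N + 1)
for k in range(1, N + 1):
    FACTORIAL_REPO[k] = FACTORIAL_REPO[k - 1] * k

def factorial(n):
    """Prefer a repository lookup; fall back to computation only when needed."""
    if n <= N:
        return FACTORIAL_REPO[n]             # pure table lookup
    if n == N + 1:
        return FACTORIAL_REPO[N] * (N + 1)   # (N+1)! = N! * (N+1): one multiply
    return math.factorial(n)                 # outside the repository: compute

assert factorial(5) == 120
assert factorial(N + 1) == math.factorial(N + 1)
```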
Now, to make this “better” algorithm worse, we could either increase N or increase the number of computers using the repository.
But perhaps I'm not grasping some subtlety of the question. The way I'm thinking about it, I keep coming up with examples that scale well until they don't.
Jon Ericson Jan 29 '09 at 19:47