When you say, "Skip lists have a higher chance of accidentally providing poor results for small datasets," what exactly are you afraid of?
What you should not be afraid of is that, for a list of 10 items, there are not enough level-2 or level-3 nodes to speed up traversal. The difference between iterating a linked list (which is what a skip list degenerates into without any level-2+ nodes) of 2 elements versus 10 elements is, in practice, negligible even in tight loops (the pointer bookkeeping your data structure requires will probably have a greater influence).
Obviously, as you add more elements, the impact of missing higher-level nodes grows. But, fortunately, the likelihood of getting a higher-level node grows as well. For example, if you add 8 elements, the probability that they are all level-1 nodes is 0.5^8 ≈ 0.39%, which is extremely unlikely.
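Here is a minimal sketch (assuming the usual coin-flip promotion scheme with p = 0.5; function names are illustrative, not from any particular library) that shows both the standard skip-list level generator and the probability that all inserted nodes stay at level 1, i.e. that the skip list degenerates into a plain linked list:

```python
import random

def random_level(max_level=4, p=0.5):
    """Typical skip-list level generator: keep promoting a new node
    while a coin flip succeeds, up to max_level."""
    level = 1
    while random.random() < p and level < max_level:
        level += 1
    return level

def prob_all_level_one(n, p=0.5):
    """Probability that all n inserted nodes end up at level 1
    (each node stays at level 1 with probability 1 - p)."""
    return (1 - p) ** n

if __name__ == "__main__":
    for n in (2, 4, 8, 16):
        print(f"n={n:2d}: P(all level 1) = {prob_all_level_one(n):.4%}")
    # n= 8 gives about 0.3906%, matching the 0.5^8 figure above
```

As the output shows, the chance of getting no higher-level nodes at all shrinks exponentially with the number of insertions, so the degenerate case matters less exactly when it would start to hurt.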