They all seem more complex (and/or less efficient) than they should be. Why not just this:
    dupli []     = []
    dupli (x:xs) = x : x : dupli xs
Your last example is close to a good fold-based implementation, but you should use foldr, which saves you from having to reverse the result:
    dupli = foldr (\x xs -> x : x : xs) []
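Both definitions give the same result; for example, in GHCi:

    ghci> dupli [1,2,3]
    [1,1,2,2,3,3]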
In terms of performance measurement, the empirical approach is profiling. As Haskell programs grow in size, they can become quite difficult to reason about in terms of time and space complexity, and profiling is your best bet. In addition, a crude but often effective empirical way of comparing the relative complexity of two functions is simply to measure how long each takes on a sufficiently large input; for example, time how long length $ dupli [1..1000000] takes and compare it with dupli'', etc.
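As a concrete sketch of that timing approach, here is a minimal benchmark using the criterion library (my choice of tool, not something the answer above prescribes), comparing the two definitions given in this answer, renamed so they can coexist in one module:

    import Criterion.Main

    -- Direct recursive definition from above.
    dupliRec :: [a] -> [a]
    dupliRec []     = []
    dupliRec (x:xs) = x : x : dupliRec xs

    -- foldr-based definition from above.
    dupliFoldr :: [a] -> [a]
    dupliFoldr = foldr (\x xs -> x : x : xs) []

    main :: IO ()
    main = defaultMain
      [ bench "recursive dupli" $ nf (length . dupliRec)   [1 .. 1000000 :: Int]
      , bench "foldr dupli"     $ nf (length . dupliFoldr) [1 .. 1000000 :: Int]
      ]

Forcing the result with length (via nf) makes sure laziness doesn't hide the cost of actually building the duplicated list.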
But for a program this small, it shouldn't be too hard to determine the complexity of the algorithm from your knowledge of the data structure(s) in question - in this case, lists. Here's a tip: when you use concatenation (x ++ y), the running time is O(length x). If concatenation is used inside a recursive algorithm that runs over all elements of a list of size n, you essentially end up with an O(n^2) algorithm. Both of the examples I gave, as well as your last example, are O(n), since the only operation used inside the recursive definition is (:), which is O(1).
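To make that tip concrete, here is a hypothetical quadratic version (not one of the definitions above) that builds the result with (++) on an accumulator:

    -- Each (++) re-traverses the accumulator built so far, so over n
    -- elements the copying adds up to O(n^2), even though the result
    -- is the same as dupli.
    dupliSlow :: [a] -> [a]
    dupliSlow = go []
      where
        go acc []     = acc
        go acc (x:xs) = go (acc ++ [x, x]) xs

The (:)-based definitions avoid this because prepending to a list is O(1) and each element is visited only once.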