One possible approach would be to model the delays as an exponential distribution. The exponential distribution models the time between events that occur continuously and independently at a constant average rate, which sounds like a fair assumption for your problem.
You can estimate the lambda parameter as the reciprocal of the average of your observed delays, and simulate a draw from the distribution like this:
    delay = -Math.Log(random.NextDouble()) / lambda
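As a minimal sketch (reusing the sample values from below, with an arbitrary 100 draws), estimating lambda and simulating plain exponential delays could look like this:

    var sample = new List<double>() { 96, 95, 112, 111, 119, 104, 143, 96, 95, 104, 120, 112 };
    // lambda is the reciprocal of the observed mean delay
    var lambda = 1d / sample.Average();
    var random = new Random();
    for (var i = 0; i < 100; i++)
    {
        // 1 - NextDouble() keeps the argument strictly positive, avoiding log(0)
        var delay = -Math.Log(1d - random.NextDouble()) / lambda;
        Console.WriteLine(delay);
    }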
However, looking at your sample, the data looks too "concentrated" around the average to be purely exponential, so simulating this way would produce delays with the right average, but spread much more widely (many short delays and occasional very long ones) than your sample shows.
One way to address this is to model the process as a shifted exponential: essentially, the distribution is shifted by a value representing the minimum a delay can take, instead of starting at 0 as a standard exponential does. In code, taking the shift to be the minimum observed value in your sample, it might look like this:
    var sample = new List<double>() { 96, 95, 112, 111, 119, 104, 143, 96, 95, 104, 120, 112 };
    var min = sample.Min();
    // work with the excess over the minimum, and fit lambda to that
    sample = sample.Select(it => it - min).ToList();
    var lambda = 1d / sample.Average();
    var random = new Random();
    var result = new List<double>();
    for (var i = 0; i < 100; i++)
    {
        // exponential draw (1 - NextDouble() avoids log(0)), shifted back up by the observed minimum
        var simulated = min - Math.Log(1d - random.NextDouble()) / lambda;
        result.Add(simulated);
        Console.WriteLine(simulated);
    }
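As a quick sanity check (an illustrative addition, not part of the approach itself), you can compare the simulated values against the sample statistics:

    // the simulated mean should land close to the sample mean (~108.9),
    // and no simulated delay can fall below the observed minimum (95)
    Console.WriteLine($"simulated mean: {result.Average()}, simulated min: {result.Min()}");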
A trivial alternative, which essentially resembles Aidan's approach, is to resample: pick random elements from your original sample, and the result will have exactly the distribution of the observed data:
    var sample = new List<double>() { 96, 95, 112, 111, 119, 104, 143, 96, 95, 104, 120, 112 };
    var random = new Random();
    var size = sample.Count();
    for (var i = 0; i < 100; i++)
    {
        // draw an index uniformly at random and emit that observed delay
        Console.WriteLine(sample[random.Next(0, size)]);
    }
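Note that resampling this way can only ever reproduce delays you have already observed, whereas the shifted exponential above will also generate in-between and occasionally larger values; which is preferable depends on how representative you believe your sample is.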