I need to implement an algorithm that computes an integral by the Monte Carlo method, and as part of the simulation I need to calculate the standard deviation of the sample generated by my program. My problem is that when I increase the number of elements in my sample, the standard deviation does not decrease as I expected. At first I thought my own function was wrong, but after computing the standard deviation with the predefined numpy function I saw that the values were the same and still did not decrease. So I wondered whether something was wrong with my sample, and I ran the following check to see whether the standard deviation shrinks as (I thought) it should:
import random
import numpy as np

sample = [random.uniform(0, 1) for i in range(100)]
print(np.std(sample))   # received standard deviation: 0.289

sample = [random.uniform(0, 1) for i in range(1000)]
print(np.std(sample))   # received standard deviation: 0.287
Shouldn't that decrease as my n increases? I need this quantity as a stopping criterion in my simulation, so I was counting on it getting smaller for a large sample. What is wrong with my mathematical reasoning? For context, a sketch of my Monte Carlo setup is below.
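Here is a stripped-down sketch of roughly what my program does; the integrand lambda x: x ** 2 and the tolerance value are just placeholders for this question, not my real code:

import random
import numpy as np

def mc_integrate(f, tol=1e-3, max_n=10**6):
    # Monte Carlo estimate of the integral of f over [0, 1].
    # I stop once the standard deviation of the sampled f-values
    # drops below tol -- but, as described above, it never does.
    values = []
    while len(values) < max_n:
        values.append(f(random.uniform(0, 1)))
        if len(values) >= 100 and np.std(values) < tol:
            break
    return np.mean(values), np.std(values)

# placeholder integrand, just for illustration
estimate, spread = mc_integrate(lambda x: x ** 2)
print(estimate, spread)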
Thanks in advance!