There are a lot of problems here.
First of all, are the pseudo-random deviates supposed to be normally distributed? I'll assume they are, since any discussion of correlation matrices becomes unpleasant if we move away from normal distributions.
Further, it is quite simple to create pseudo-random normal deviates, given the covariance matrix. Generate standard normal (independent) deviates, then transform them by multiplying by a Cholesky factor of the covariance matrix. Add the mean at the end if the mean is not zero.
And the covariance matrix is also quite simple to create, given the correlation matrix. Just pre- and post-multiply the correlation matrix by a diagonal matrix of the standard deviations. This scales the correlation matrix up into a covariance matrix.
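For example, here is a minimal MATLAB sketch of both steps, using a hypothetical correlation matrix C0, standard deviations sig, and mean vector mu0 (none of which come from the question):

% hypothetical inputs, chosen only for illustration
C0  = [1 0.3 0.1; 0.3 1 0.5; 0.1 0.5 1];    % target correlation matrix
sig = [2; 0.5; 1.5];                         % standard deviations
mu0 = [10 0 -3];                             % means

% scale the correlation matrix into a covariance matrix
S0 = diag(sig)*C0*diag(sig);

% Cholesky factor, transform independent standard normal deviates, then add the mean
R0 = chol(S0);
X0 = randn(1000,3)*R0 + repmat(mu0,1000,1);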
I'm still not sure where the problem lies in this question, since it would seem easy enough to create a “random” correlation matrix with elements uniformly distributed in the desired range.
So all of the above is quite trivial by any reasonable standard, and there are many tools for generating pseudo-random normal deviates given that information.
Perhaps the problem is that the user insists the resulting matrix of random deviates must have sample correlations in the specified range. You must recognize that a set of random numbers will have the requested distribution parameters only in an asymptotic sense. That is, as the sample size approaches infinity, you should expect to see the specified distribution parameters. But any small sample will not necessarily have its sample parameters in the desired ranges.
For example (in MATLAB), here is a simple positive definite 3x3 matrix. As such, it makes a perfectly good covariance matrix.
S = randn(3);
S = S'*S
S =
      0.78863      0.01123     -0.27879
      0.01123       4.9316       3.5732
     -0.27879       3.5732       2.7872
I convert S to a correlation matrix.
s = sqrt(diag(S));
C = diag(1./s)*S*diag(1./s)
C =
            1    0.0056945     -0.18804
    0.0056945            1      0.96377
     -0.18804      0.96377            1
Now I can sample from the corresponding multivariate normal distribution using the Statistics Toolbox; mvnrnd should do the trick. A minimal call might look like this (assuming, purely for illustration, a zero mean):
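X = mvnrnd(zeros(1,3), S, 20);   % 20 draws, zero mean, covariance S (illustrative call)

Just as easy is to use a Cholesky factor.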
L = chol(S)
L =
      0.88805     0.012646     -0.31394
            0       2.2207       1.6108
            0            0      0.30643
Now generate standard normal pseudo-random deviates, then transform them as desired.
X = randn(20,3)*L;
cov(X)
ans =
      0.79069     -0.14297     -0.45032
     -0.14297       6.0607       4.5459
     -0.45032       4.5459       3.6549

corr(X)
ans =
            1     -0.06531      -0.2649
     -0.06531            1      0.96587
      -0.2649      0.96587            1
If your requirement was that the correlations must ALWAYS be greater than -0.188, then this sampling has failed, simply because the numbers are pseudo-random. In fact, that goal will be difficult to achieve unless your sample size is large enough.
You could use a simple rejection scheme, whereby you draw a sample and then repeat, over and over, until the sample has the required properties, with the correlations in the required ranges. This can get tedious.
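For instance, here is a minimal sketch of such a rejection loop, reusing the L from above and assuming, purely for illustration, that every off-diagonal sample correlation must be at least -0.188:

lo = -0.188;                 % hypothetical lower bound on the correlations
ok = false;
while ~ok
    X = randn(20,3)*L;       % sample exactly as before
    Rs = corr(X);
    od = Rs(~eye(3));        % off-diagonal sample correlations
    ok = all(od >= lo);      % accept only if all of them clear the bound
end

The smaller the sample or the tighter the ranges, the more draws this loop will throw away.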
An approach that may work (but one that I haven't completely thought out at this point) is to use the standard scheme, as described above, to create a random sample. Compute the correlations. If they do not lie in the proper ranges, determine the perturbation that would have to be applied to the actual (measured) covariance matrix of your data so that the correlations come out as desired. Now find a zero-mean random perturbation to your sampled data that would move the sample covariance matrix in the desired direction.
This might work, but unless I knew that this is really the question at hand, I won't dig into it further. (Edit: I've thought about this problem some more, and it looks like a quadratically constrained quadratic programming problem: find the smallest perturbation to the matrix X such that the resulting covariance (or correlation) matrix has the desired properties.)