Herbert,
if your goal is to compare different learning algorithms, I recommend that you use nested cross-validation. (By "learning algorithm" I mean the various algorithms, such as logistic regression, decision trees, and other discriminative models, that learn a hypothesis or model - the final classifier - from your training data.)
"Regular" cross-validation is great if you like to configure hyperparameters of one algorithm. However, as soon as you start optimizing hyperparameters with the same cross-validation parameters / folds, your performance assessment is likely to be excessive. The reason that you use cross-validation over and over again is that your test data will become, to some extent, “learning data”.
I actually get asked this question quite often, so I'll borrow a few excerpts from the FAQ entry I posted here: http://sebastianraschka.com/faq/docs/evaluate-a-model.html
In nested cross-validation, we have an outer k-fold cross-validation loop that splits the data into training and test folds, and an inner loop that is used for model selection via k-fold cross-validation on the training fold. After model selection, the test fold is used to evaluate the model's performance. Once we have identified our "favorite" algorithm, we can follow up with a "regular" k-fold cross-validation approach (on the full training set) to find its "optimal" hyperparameters and evaluate it on an independent test set. To make this clearer, consider a logistic regression model: using nested cross-validation, you would train m different logistic regression models, one for each of the m outer folds, and the inner folds are used to optimize the hyperparameters of each model (for example, using grid search in combination with k-fold cross-validation). If your model is stable, these m models should all have the same hyperparameter values, and you report the average performance of this model based on the outer test folds. Then you move on to the next algorithm, for example an SVM, and so on.
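To make that follow-up step concrete (regular k-fold tuning of the chosen algorithm plus a final check on an independent test set), here is a minimal sketch using scikit-learn; the toy dataset, the logistic regression estimator, and the parameter grid are just illustrative assumptions, not part of the recipe itself:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

# Toy data for illustration only - substitute your own training/test split.
X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Step 1: "regular" k-fold CV on the full training set to tune the chosen
# algorithm (here: logistic regression, assuming nested CV picked it as the favorite).
param_grid = {'C': [0.01, 0.1, 1.0, 10.0, 100.0]}
gs = GridSearchCV(estimator=LogisticRegression(max_iter=1000),
                  param_grid=param_grid,
                  scoring='accuracy',
                  cv=5)
gs.fit(X_train, y_train)
print('Best hyperparameters:', gs.best_params_)

# Step 2: evaluate the refit "best" model once on the independent test set.
print('Test accuracy: %.3f' % gs.score(X_test, y_test))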

I can also highly recommend this excellent article, which discusses the issue in more detail:
PS: As a rule, you don't really need to tune the hyperparameters of a random forest (that extensively). The idea behind random forests (bagging) is that the individual decision trees don't really need to be pruned - in fact, one of the reasons Breiman came up with the random forest algorithm was to deal with the pruning/overfitting problem of individual decision trees. So the only parameter you really need to worry about is the number of trees (and possibly the number of random features per split). As a rule of thumb, you are typically best off drawing bootstrap samples of size n (where n is the number of samples in the original training set) and considering sqrt(m) random features at each split (where m is the dimensionality, i.e., the number of features, of your training set).
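For what it's worth, a minimal sketch of those defaults in scikit-learn (RandomForestClassifier is the assumed implementation here, and the toy data and tree count are purely illustrative):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy data for illustration only.
X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# The main knobs: number of trees and number of random features per split.
# max_features='sqrt' considers sqrt(m) random features at each split, and each
# tree is grown on a bootstrap sample of size n (the scikit-learn default).
forest = RandomForestClassifier(n_estimators=500, max_features='sqrt', random_state=1)
scores = cross_val_score(forest, X, y, scoring='accuracy', cv=5)
print('CV accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))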
Hope this was helpful!
Edit:
Sample code for setting up nested CV via scikit-learn:
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

# SVM pipeline: feature standardization followed by the classifier.
pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))])

param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
param_grid = [{'clf__C': param_range, 'clf__kernel': ['linear']},
              {'clf__C': param_range, 'clf__gamma': param_range, 'clf__kernel': ['rbf']}]

# Nested cross-validation (here: 5 x 2 cross-validation).
gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=5)
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=2)  # X_train, y_train: your training data
print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores)))
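Here the inner loop (GridSearchCV with cv=5) picks the SVM hyperparameters, while the outer loop (cross_val_score with cv=2) estimates how well the whole tuning-plus-SVM procedure generalizes. Once you have settled on the SVM this way, you would run the grid search one more time on the full training set to get the final hyperparameter values, as described above.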