I am trying to use a Multiple Classifier System (MCS) to improve accuracy when working with limited data.
I am currently using K-means clustering, though FCM (Fuzzy c-means) would also work, to group the data into clusters; the data could represent anything, for example colours. First, after pre-processing and normalization, I cluster the data and obtain several clusters with large gaps between them. I then use the clusters as training data for Naive Bayes classifiers: each cluster corresponds to one colour, a separate Naive Bayes classifier is trained on each cluster, and new data is routed from the clusters through these separate classifiers, so that each classifier is trained on only one colour. As a concrete example, take the spectrum 3 - 10 as blue and 13 - 20 as red, with 0 - 1.5 being white, 1.5 - 3 a gradual transition from white to blue, and a similar transition from blue to red.
What I would like to know is which aggregation method (if that is what you would use) could be applied to make the Naive Bayes classifiers stronger, and how it works. Does the aggregation method already know the correct answer, or does a human correct the results, with those corrections then fed back into the Naive Bayes training data? Or a combination of both?

Looking at bootstrap aggregation (bagging), each model in the ensemble votes with equal weight, so I am not entirely sure it fits this particular case: would bagging be the right aggregation method here? Boosting, on the other hand, builds the ensemble incrementally by training each new model to emphasise the training examples that previous models misclassified, but I am not sure it would be a better alternative to bagging, because I do not understand how the ensemble is incrementally built from new models. Finally, there is Bayesian model averaging, an ensemble technique that seeks to approximate the Bayes optimal classifier by sampling hypotheses from the hypothesis space and combining them using Bayes' rule, but I am completely unsure how you would sample hypotheses from that space.
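For reference, here is a minimal sketch of the bagging variant mentioned above: each Naive Bayes model is trained on a bootstrap resample and then votes with equal weight, which is exactly the property noted in the question. The labelled 1-D data and the colour labels are assumptions based on the ranges given earlier, and `bagged_predict` is a hypothetical helper name:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy labelled data: 1-D values with colour labels taken from the ranges in
# the question (the labels themselves are an assumption for illustration).
rng = np.random.default_rng(1)
X = np.concatenate([rng.uniform(0, 3, 60),
                    rng.uniform(3, 10, 60),
                    rng.uniform(13, 20, 60)]).reshape(-1, 1)
y = np.array(["white"] * 60 + ["blue"] * 60 + ["red"] * 60)

def bagged_predict(X_train, y_train, X_new, n_models=15):
    """Bagging: fit each model on a bootstrap resample, then take an
    equal-weight majority vote across all models."""
    votes = []
    n = len(X_train)
    for _ in range(n_models):
        idx = rng.integers(0, n, n)          # bootstrap sample with replacement
        model = GaussianNB().fit(X_train[idx], y_train[idx])
        votes.append(model.predict(X_new))
    votes = np.array(votes)                  # shape: (n_models, n_queries)
    # Majority vote, one column of votes per query point:
    return [max(set(col), key=list(col).count) for col in votes.T]

predictions = bagged_predict(X, y, np.array([[1.0], [6.0], [17.0]]))
```

Boosting would differ only in the resampling step: instead of uniform bootstrap samples, each new model's training set is reweighted toward the examples the earlier models got wrong, which is what "gradually building the ensemble" refers to.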
I know that a competitive approach between two classification algorithms is often used, where weights are applied to each classifier and, if the weighting is right, you get the best of both; but I would prefer not to use a competitive approach here.
A final question: would using these two methods together in this way be useful? I know the example I have given is very primitive and these techniques may not be needed for it, but could the combination be useful on more complex data?