From some sources I hear that generative adversarial networks are a form of unsupervised learning, but I don't understand why. Are GANs really unsupervised?
1) 2-Class Real-vs-Fake Case
Indeed, you need to provide training data for the discriminator, and this should be "real" data, i.e. data that would, say, be labeled 1. Although no one labels the data explicitly, it is labeled implicitly: in the first steps you feed the discriminator training data that you assert is genuine, so you effectively tell the discriminator the labels of that data. Conversely, the noise data produced by the generator in the first steps is implicitly labeled 0, since the discriminator is told those samples are fake.
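To make the implicit labeling concrete, here is a minimal sketch of how a discriminator's training batch is typically assembled. The data arrays are hypothetical stand-ins (random numbers in place of a real dataset and a real generator); the point is only that the training loop itself attaches the labels 1 and 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "real" samples from a dataset and
# "fake" samples from a generator network.
real_batch = rng.normal(loc=5.0, size=(8, 2))
fake_batch = rng.normal(loc=0.0, size=(8, 2))

# The implicit labeling: no human annotates anything, but the
# training loop assigns 1 to every real sample and 0 to every
# generated one before updating the discriminator.
x = np.concatenate([real_batch, fake_batch])
y = np.concatenate([np.ones(len(real_batch)),
                    np.zeros(len(fake_batch))])

# The discriminator is then trained on (x, y) as an ordinary
# binary classifier, e.g. with a binary cross-entropy loss.
```

So the labels exist, but they are generated mechanically by the training procedure rather than supplied by an annotator, which is why the setup is often described as unsupervised (or self-supervised).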
2) Multi-class case
But the multi-class case is stranger still: there, class labels must be provided in the training data. The apparent contradiction is that this supplies labeled answers to a supposedly unsupervised ML algorithm.
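In the multi-class (conditional) setting, the class labels genuinely come from the dataset, so that part is plainly supervised. A minimal sketch, with hypothetical stand-in data, of how such labels are one-hot encoded and attached to the discriminator's input:

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes = 3

# Hypothetical stand-ins: the dataset must already carry class
# labels, i.e. supervised annotations.
real_images = rng.normal(size=(6, 4))
real_labels = rng.integers(0, num_classes, size=6)

# One-hot encode the labels and concatenate them to the inputs,
# so the discriminator sees (sample, class) pairs.
one_hot = np.eye(num_classes)[real_labels]
conditioned = np.concatenate([real_images, one_hot], axis=1)
```

This is why conditional GANs are usually described as supervised (or semi-supervised), even though the real-vs-fake signal itself still requires no annotation.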