I would appreciate some help with this. I have a classifier that classifies images as dog or cat with good accuracy, and I have a good dataset for training it, so that part is not a problem.
I have about 20,000 dog images and 20,000 cat images.
However, when I present other images, such as a car, a building, or a tiger (i.e. something that is neither a dog nor a cat), I would like the classifier to output "Neither". Right now the classifier obviously tries to classify everything as either a dog or a cat, which is not what I want.
Question 1:
How can I achieve this? Do I need a third set of images that contain neither dogs nor cats, and train the classifier on these additional images so it recognizes everything else as "Neither"?
At a high level, roughly how many "not dog / cat" images would I need to get good accuracy? Would about 50,000 be enough, given that the non-dog/cat domain is so huge, or do I need more?
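Just to make sure I'm describing the approach correctly: something like the sketch below is what I mean by the three-class setup (a minimal sketch only; the layers and sizes are placeholders, not a tuned model).

```python
# Minimal sketch of a 3-class classifier (dog / cat / neither).
# Architecture and layer sizes are placeholders, not a tuned model.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 3  # dog, cat, neither

model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),  # 3 outputs instead of 2
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```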
Question 2:
Instead of training my own classifier from scratch on my own image data, can I use the ImageNet-pretrained VGG16 model in Keras as the base (feature-extraction) layers and add a Dog / Cat / Neither classifier on top as fully connected layers?
See, for example, how a pretrained model can be downloaded in Keras.
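This is roughly what I have in mind for Question 2 (a sketch only, assuming Keras with a TensorFlow backend; the dense-layer sizes and learning rate are just placeholders):

```python
# Sketch: reuse VGG16 (ImageNet weights) as a frozen feature extractor
# and train only a new 3-class head on top.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

# Load VGG16 without its original 1000-class ImageNet head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base

model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # dog / cat / neither
])

model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Is this kind of transfer-learning setup a reasonable way to handle the "Neither" class, and would it also reduce how many non-dog/cat images I need?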
Many thanks for your help.