I've gone through a number of tutorials on OpenCV and its cascade training tools. In particular, I'm trying to train a car classifier with the sample creation utility, but I keep finding conflicting statements about the -w and -h options, so I'm confused. I mean this command:
$ createsamples -info samples.dat -vec samples.vec -w 20 -h 20
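In case it matters, samples.dat is the usual annotation/info file: one line per image with the image path, the number of objects, and an x y width height box for each object. The paths and coordinates below are just made-up examples:

```
cars/img_0001.jpg  1  140 100 90 60
cars/img_0002.jpg  2  40 30 45 45   200 80 50 50
```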
I have the following three questions:
I understand that the aspect ratio of the positive samples should match the aspect ratio implied by the -w and -h options above. But do ALL positive samples have to be the same size? For example, I have about 1000 images; do they all need to be identical in size after cropping?
If it's not size but aspect ratio that matters, how closely does the aspect ratio of the positive samples have to match the ratio given by -w and -h? Is the classifier so sensitive that a few pixels here and there will hurt its performance, or is it safe to train on images that are all roughly the same aspect ratio? (The small check script below is how I'm measuring the spread in my own set.)
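For what it's worth, this is the rough Python check I use to see how far my crops actually drift from the 1:1 ratio implied by -w 20 -h 20. The directory and file pattern are just placeholders for wherever the cropped positives live:

```python
# Measure how much the aspect ratios of my cropped positives deviate
# from the 1:1 target implied by -w 20 -h 20.
import glob
import cv2  # opencv-python

target = 20 / 20  # width / height from the -w and -h options
ratios = []
for path in glob.glob("positives/*.png"):   # placeholder location of my crops
    img = cv2.imread(path)
    if img is None:
        continue
    h, w = img.shape[:2]
    ratios.append(w / h)

if ratios:
    print(f"{len(ratios)} samples")
    print(f"aspect ratio range: {min(ratios):.3f} .. {max(ratios):.3f}")
    print(f"worst deviation from target: {max(abs(r - target) for r in ratios):.3f}")
```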
I have already cropped a number of images to the same size, but in trying to make them identical, some crops include a little more background inside the bounding rectangle than others, and some have slightly different margins. (See the two example images below: the larger car fills more of its image, while there is a wider margin around the small car.) I'm just wondering whether a collection of images like this is acceptable, or whether it reduces the accuracy of the classifier and I should instead provide tighter bounding boxes around all the objects of interest (cars, in this case)?
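For context, this is roughly how I'm producing the crops at the moment. The tight box is drawn by hand, the coordinates and paths here are placeholders, and padding the box out to a square is just my own guess at how to keep the margins consistent, which is exactly the part I'm unsure about:

```python
# Expand a hand-drawn tight box to a square so every crop has the same
# 1:1 aspect ratio as -w 20 -h 20, instead of eyeballing the margins.
import cv2

def square_crop(img, x, y, w, h):
    """Grow the shorter side of the (x, y, w, h) box so the crop is square,
    clamping to the image borders (edge crops may end up slightly smaller)."""
    side = max(w, h)
    cx, cy = x + w // 2, y + h // 2
    x0 = max(0, cx - side // 2)
    y0 = max(0, cy - side // 2)
    x1 = min(img.shape[1], x0 + side)
    y1 = min(img.shape[0], y0 + side)
    return img[y0:y1, x0:x1]

img = cv2.imread("raw/car_001.jpg")            # placeholder path
crop = square_crop(img, 312, 148, 90, 60)      # placeholder tight box around a car
cv2.imwrite("positives/car_001.png", crop)
```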