Improving the CamShift algorithm in OpenCV

I use the CamShift algorithm from OpenCV to track objects. The input comes from a webcam, and the object is tracked between consecutive frames. How can I make the tracking more robust? If I move the object quickly, tracking fails. Also, when the object is not in the frame, false positives appear. How can I improve this?

+4

3 answers

Object tracking is an active area of research in computer vision. There are many algorithms for it, and none of them works in 100% of cases.

If you need to track in real time, you need something simple and fast. I assume you have a way to segment the moving object from the background. You can then compute a representation of the object, for example a color histogram, and compare it with the candidate object you find in the next frame. You should also check that the object has not moved implausibly far between frames. If you want to try more advanced motion tracking, you should look at the Kalman filter.

Determining that the object is not in the frame is also a hard problem. First, what objects are you trying to track? People? Cars? Dogs? You can build an object classifier that tells you whether a moving region in the frame is the object of interest, as opposed to noise or some other object. The classifier can be something very simple, such as a size limit, or it can be very complex. In the latter case, you will need to learn about the features that can be computed, about classification algorithms such as support vector machines, and you will need to collect training images.

In short, a reliable tracker is not easy to build.

+2

Suppose you find the object in the first two frames. From this information, you can extrapolate where you expect the object to be in the third frame. Instead of running a generic find-the-object algorithm over the whole frame, you can use a slower, more complex (and therefore presumably more reliable) algorithm, restricting it to the neighborhood that the extrapolation predicts. The object may not be exactly where you expect (the velocity vector may be changing), but you can certainly reduce the area that has to be examined.

This should reduce the number of cases where some other part of the frame is mistakenly identified as the object (both because you are searching a smaller part of the frame and because you are using a better detector).

Update the extrapolation based on what you find, and iterate for the next frame.
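
The extrapolation step above can be sketched as a constant-velocity prediction; the function name, the `margin` factor, and the returned box format are assumptions for illustration:

```python
def predict_search_window(c_prev, c_curr, box_size, margin=1.5):
    """Predict the object's next center from its last two centers,
    and return an enlarged search box around that prediction.
    c_prev, c_curr: (x, y) centers from the previous two frames."""
    vx, vy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    cx, cy = c_curr[0] + vx, c_curr[1] + vy      # linear extrapolation
    w, h = box_size[0] * margin, box_size[1] * margin
    return (cx, cy), (cx - w / 2, cy - h / 2, w, h)

# Object moved from (100, 100) to (110, 105), so we expect (120, 110) next.
center, window = predict_search_window((100, 100), (110, 105), (40, 40))
print(center, window)
```

You would then run your detector only inside `window`, and fall back to a full-frame search (as the answer suggests) whenever nothing is found there.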

If the object leaves the frame, fall back to your general feature detector, as with the first two frames, and try to reacquire a "lock" when the object comes back into view.

Also, if you can, throw as much light as possible onto the physical scene. If the scene is dim, the webcam will use a longer exposure time, which leads to more motion blur on moving objects. Motion blur can make life very difficult for a feature detector (although it can give you information about direction and speed).

+1

I have found that if you expand the border of the search window in CamShift, the algorithm adapts better to fast-moving objects, although it can introduce some jitter. Just try making your window border 10% bigger and see what happens.
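
One way to implement that 10% expansion is to grow the window returned by the previous iteration before feeding it back in; the helper below is an assumed sketch, not OpenCV API, and the clamping keeps the box inside the frame:

```python
def expand_window(win, frame_w, frame_h, grow=0.10):
    """Return (x, y, w, h) enlarged by `grow` on each side and clamped to the frame."""
    x, y, w, h = win
    dw, dh = int(w * grow), int(h * grow)
    x, y = max(0, x - dw // 2), max(0, y - dh // 2)
    w = min(frame_w - x, w + dw)
    h = min(frame_h - y, h + dh)
    return x, y, w, h

print(expand_window((100, 100, 50, 40), 640, 480))
```

Each frame you would call something like `ret, win = cv2.CamShift(back_projection, expand_window(win, 640, 480), criteria)`, so the search region always has some slack around the last known position.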

0

Source: https://habr.com/ru/post/1339054/
