How to use a Kalman filter to predict GPS positions between measurements

I studied the OpenCV Kalman filter implementation, did some basic mouse-pointer simulations, and understood the basics. But I seem to have missed a few key points for using it in my application, and I was hoping someone here could provide a small example.

Using a simple model with position and velocity:

    KF.statePre.at<float>(0) = mouse_info.x;
    KF.statePre.at<float>(1) = mouse_info.y;
    KF.statePre.at<float>(2) = 0;
    KF.statePre.at<float>(3) = 0;
    KF.transitionMatrix = (Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);
    setIdentity(KF.measurementMatrix);
    setIdentity(KF.processNoiseCov, Scalar::all(1e-2));
    setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
    setIdentity(KF.errorCovPost, Scalar::all(.1));

I can make a prediction

 Mat prediction = KF.predict(); 

and I can make a correction

 Mat estimated = KF.correct(measurement); 

Running this in a loop works, but I do not fully understand what the prediction, the estimate, and the measurement represent.

I understand that the measurement is the "ground truth" value corrupted by some noise — say, the latitude and longitude from a GPS. This video shows the idea I am after: https://www.youtube.com/watch?v=EQD0PH09Jvo . It uses a GPS unit that updates at 1 Hz, and then uses a Kalman filter to predict values at 10 Hz.

What would such a setup look like? Is the following example a way to do this?

    Mat prediction;
    for (int i = 0; i < 9; ++i)
        prediction = KF.predict();           // nine predictions ...
    Mat estimated = KF.correct(measurement); // ... then one measurement
    for (int i = 0; i < 9; ++i)
        prediction = KF.predict();
    estimated = KF.correct(measurement);

I am not sure whether it is valid to predict nine values and then, on the tenth, provide a measurement.

I am logging data with which I would like to test. The recorded data is 1 Hz GPS data in a file where each line is: timestamp lat lng. Separately, a series of events is recorded at 15 Hz in another file, where each line is: timestamp eventdata.

I would like to use the Kalman filter to replay the recorded data and predict the position at every eventdata timestamp from the 1 Hz measurements. A full OpenCV example would be nice, but pointers on the predict/correct questions above would also be acceptable.

Update

Here is what I tried:

    int main() {
        char * log  = "C:/dev/sinnovations/opencv_examples/kalman_tracking/data/geosamples.txt";
        char * log1 = "C:/dev/sinnovations/opencv_examples/kalman_tracking/data/geosamples1.txt";
        ifstream myReadFile(log);
        ofstream myfile(log1);
        myfile.precision(15);

        KalmanFilter KF(4, 2, 0, CV_64F);
        Mat_<double> state(4, 1);
        Mat_<double> processNoise(4, 1, CV_64F);
        Mat_<double> measurement(2, 1);
        measurement.setTo(Scalar(0));

        KF.statePre.at<double>(0) = 0;
        KF.statePre.at<double>(1) = 0;
        KF.statePre.at<double>(2) = 0;
        KF.statePre.at<double>(3) = 0;
        KF.transitionMatrix = (Mat_<double>(4, 4) <<
            1, 0, 1, 0,
            0, 1, 0, 1,
            0, 0, 1, 0,
            0, 0, 0, 1); // including velocity
        KF.processNoiseCov = (cv::Mat_<double>(4, 4) <<
            0.2, 0, 0.2, 0,
            0, 0.2, 0, 0.2,
            0, 0, 0.3, 0,
            0, 0, 0, 0.3);
        setIdentity(KF.measurementMatrix);
        setIdentity(KF.processNoiseCov, Scalar::all(1e-4)); // note: overwrites the matrix set just above
        setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
        setIdentity(KF.errorCovPost, Scalar::all(.1));

        std::vector<cv::Point3d> data;
        while (!myReadFile.eof()) {
            double ms;
            double lat, lng, speed;
            myReadFile >> ms;
            myReadFile >> lat;
            myReadFile >> lng;
            myReadFile >> speed;
            data.push_back(cv::Point3d(ms, lat, lng));
        }
        std::cout << data.size() << std::endl;

        for (int i = 1, ii = data.size(); i < ii; ++i) {
            const cv::Point3d & last = data[i - 1];
            const cv::Point3d & current = data[i];
            double steps = current.x - last.x;
            std::cout << "Time between points: " << current.x - last.x << endl;
            std::cout << "Measurement: " << current << endl;

            measurement(0) = last.y;
            measurement(1) = last.z;
            Mat estimated = KF.correct(measurement);
            std::cout << "Estimated: " << estimated.t() << endl;

            Mat prediction = KF.predict();
            for (int j = 0; j < steps; j += 100) {
                prediction = KF.predict();
                myfile << (long)((last.x - data[0].x) + j) << " "
                       << prediction.at<double>(0) << " "
                       << prediction.at<double>(1) << endl;
            }
            std::cout << "Prediction: " << prediction.t() << endl << endl;
        }
        return 0;
    }

but something is missing, as the results in the picture show. The red circles are the recorded values, and the blue line is the predicted values for the first coordinate (the latitude):

I do not think I am handling the timestamps of the observations correctly, so the predicted values come out wrong.

(image: recorded positions vs. predicted latitude)

2 answers

Let's first look at what you said about the world in your model:

    float dt = 1;
    KF.transitionMatrix = (Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0, dt,
        0, 0, 1, 0,
        0, 0, 0, 1);

Here I have rewritten your transition matrix to make explicit the fact that you encoded a fixed timestep of dt = 1. In other words, x_next = x + dt * x_vel. Many explanations of the Kalman filter imply that F (the usual name for the transition matrix) is a constant. In this case, you would need to update the time-dependent terms of F whenever your timestep changes.

 setIdentity(KF.errorCovPost, Scalar::all(.1)); 

You have initialized the error covariance of your state vector to I * 0.1. That means the variance of each of your initial guesses is 0.1. Variance is written as "sigma squared" because it is the square of the standard deviation. We work with variances because of their convenient property that the variance of a sum of independent random variables is the sum of their variances (in short: variances add).

So the standard deviation of each of your initial guesses is sqrt(0.1) ~= 0.3. You are therefore saying that 68% of the time your initial guesses are within +/- 0.3 units, and 95% of the time within +/- 0.6 units. If that leap is confusing, read up on the normal distribution.

Are these reasonable statements? You know your initial mouse position exactly (I assume), so your initial error covariance for x and y should probably be closer to 0 — although being 95% sure your initial position is within +/- 0.6 pixels is pretty close. You are also saying the initial velocity is within +/- 0.6 pixels/dt. Suppose your actual dt was something like 1/60 of a second. That means you are 95% sure the mouse starts out slower than 0.6 * 60 = 36 pixels/sec, a pace that would take about a minute to cross a typical monitor. So you are saying you are very sure the initial mouse velocity is very slow.

Why does all this matter? Why did I spend so much time on it? Because the only "magic" of the Kalman filter is that it weighs your measurements against your predicted state according to the proportion of error in each. Your Kalman filter is only as good as your error estimates.

You can very easily improve your variance estimate for mouse velocity: just record a bunch of typical mouse movements and compute the variance of the velocities (this is trivial: it is just the mean of the squared deviations from the mean. In this case, since you initialize your state with the velocity forced to 0, it is consistent to make the same assumption here, so you simply take the mean of the squared velocities over the sampling period).

 setIdentity(KF.processNoiseCov, Scalar::all(1e-2)); 

Here you have again initialized a diagonal matrix. Let's interpret the statement: with each timestep, before looking at the actual mouse position, you lose some confidence in your state. Variances add (see above), so the Kalman filter simply adds processNoiseCov (usually called Q) to the state covariance to represent this erosion. So, relying only on your knowledge of the mouse's motion from its velocity, you are 95% sure your predicted position is within +/- 2 * sqrt(1e-2) = +/- 0.2 pixels. In other words, without knowing anything about what the user has actually done (we have not reached the measurement step yet), you are very confident in your constant-velocity model. And since the same logic says the velocity is 95% certain not to change by more than 0.2 pixels/dt (which may be smoother than physically possible mouse motion), you are telling the Kalman filter that you are very sure the model is right, and it should be trusted heavily.

 setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1)); 

I am not going to break this one down in full, except to point out that:

  • Your noise covariance for the measured mouse position is higher than your process noise covariance. In other words, you believe the mouse moves in a straight (estimated) line more than you believe the mouse positions you measure.
  • Your belief in the accuracy of your measured mouse positions is weaker than it should be: you are not using something "fuzzy" like GPS or camera tracking. You are handed the actual, noiseless mouse coordinate. Your true measurementNoiseCov (usually called R) is 0.

Now, what is the effect of setting R = 0? It means that at the KF.correct(measurement) step, 100% of the error is attributed to your model. Thus K (the Kalman gain) takes 100% of the measurement and replaces your x, y mouse position with the new value. That is essentially correct. If you swap the mouse for GPS, then R != 0, and you will blend the new measurement with some of the old state. But in your actual system (unless you intentionally inject simulated noise) you know your mouse position exactly. If you do add simulated noise, you can put its exact variance into R, and you get the optimal state estimate, as promised.

Taking 100% of the measurement sounds pretty boring as far as state estimation goes. But one of the magic bits of the Kalman filter is that knowing the exact position propagates back into the other terms (here, the estimated velocity), because the filter knows your transitionMatrix (aka F), so it knows how a bad velocity estimate affected the (now known to be bad) position estimate.

Now, to answer your specific question:

What would such a setup look like? Is the following example a way to do this?

That appears to work with this particular implementation. Normally you run a Kalman filter in a predict + correct loop (the correct step is often called the "update"). The state carried forward is the posterior estimate after correction. In some implementations, the predict function might not update the persistent state at all, which would break your loop of repeated predictions.

Someone else already pointed out your double predict() call.

What you will find in the literature is that your approach is not often described. This is mainly because a great deal of state-estimation work was done in the 60s and 70s, when the cost of running the filter at a high rate was prohibitive. Instead, the idea was to update the filter only at a low rate (1 Hz in your example). This is done by keeping an error-state vector for position (and for velocity and acceleration in highly dynamic systems), running the full predict + update steps at the slow rate, and meanwhile applying the predicted error corrections continuously at the high rate. This form is called an indirect Kalman filter. You will also find articles claiming this form is archaic cargo-cult programming that gives slightly worse results than the direct approach. Even if that is true, it does not change the fact that any Google search on position tracking is likely to lead you to the indirect form, which is harder to understand.


You call predict() before entering the loop over j. As a result, you make two predict() calls before the first write to myfile in each iteration. This causes the kink in the blue line near the measurements. I would delete the call before the loop.

The prediction step is sometimes called filter propagation, which gives a clearer picture of what is actually happening: you are moving the filter forward in time. In your case, the filter is propagated one 100 ms step at a time.


Source: https://habr.com/ru/post/1201975/
