Efficiently calculating optical flow parameters - MATLAB

I am implementing the partial-derivative equations from the Horn and Schunck paper for optical flow. However, even for relatively small images (320x568), it takes quite a long time to complete (~30-40 seconds). I assume this is because the loop runs 320 x 568 = 181,760 iterations, but I cannot find a more efficient way to do this (short of writing a MEX file).

Is there a way to turn this into a more efficient MATLAB operation (perhaps a convolution)? I can see how to do this as a convolution for It, but not for Ix and Iy. I also considered shifting the matrices, but as far as I can tell that only works for It.

Has anyone else run into this problem and found a solution?

My code is below:

function [Ix, Iy, It] = getFlowParams(img1, img2)

% Make sure image dimensions match up
assert(size(img1, 1) == size(img2, 1) && size(img1, 2) == size(img2, 2), ...
    'Images must be the same size');
assert(size(img1, 3) == 1, 'Images must be grayscale');

% Dimensions of original image
[rows, cols] = size(img1);
Ix = zeros(numel(img1), 1);
Iy = zeros(numel(img1), 1);
It = zeros(numel(img1), 1);

% Pad images to handle edge cases
img1 = padarray(img1, [1,1], 'post');
img2 = padarray(img2, [1,1], 'post');

% Concatenate i-th image with i-th + 1 image
imgs = cat(3, img1, img2);

% Calculate energy for each pixel
for i = 1 : rows
    for j = 1 : cols
        cube = imgs(i:i+1, j:j+1, :);
        Ix(sub2ind([rows, cols], i, j)) = mean(mean(cube(:, 2, :) - cube(:, 1, :)));
        Iy(sub2ind([rows, cols], i, j)) = mean(mean(cube(2, :, :) - cube(1, :, :)));
        It(sub2ind([rows, cols], i, j)) = mean(mean(cube(:, :, 2) - cube(:, :, 1)));
    end
end
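
For reference, the run time can be measured with timeit (a minimal sketch; frame1 and frame2 are just stand-ins for two same-size grayscale frames):

% Stand-in grayscale frames of the size mentioned in the question
frame1 = rand(320, 568);
frame2 = rand(320, 568);

% timeit runs the function several times and reports a typical run time
t = timeit(@() getFlowParams(frame1, frame2));
fprintf('getFlowParams took %.2f s\n', t);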
2 answers

2D convolution is indeed the way to go, as anticipated in the question, to replace those heavy mean/average calculations. In addition, the iterative differencing can be replaced with MATLAB's diff. With all of that incorporated, a vectorized implementation would be -

%// Pad images to handle edge cases
img1 = padarray(img1, [1,1], 'post');
img2 = padarray(img2, [1,1], 'post');

%// Store size parameters for later usage
[m,n] = size(img1);

%// Differentiation along dim-2 on input imgs for Ix calculations
df1 = diff(img1,[],2);
df2 = diff(img2,[],2);

%// 2D Convolution to simulate average calculations & reshape to col vector
Ixvals = (conv2(df1,ones(2,1),'same') + conv2(df2,ones(2,1),'same'))./4;
Ixout = reshape(Ixvals(1:m-1,:),[],1);

%// Differentiation along dim-1 on input imgs for Iy calculations
df1 = diff(img1,[],1);
df2 = diff(img2,[],1);

%// 2D Convolution to simulate average calculations & reshape to col vector
Iyvals = (conv2(df1,ones(1,2),'same') + conv2(df2,ones(1,2),'same'))./4;
Iyout = reshape(Iyvals(:,1:n-1),[],1);

%// It just needs elementwise differentiation between the input imgs.
%// 2D convolution to simulate mean calculations & reshape to col vector
Itvals = conv2(img2-img1,ones(2,2),'same')./4;
Itout = reshape(Itvals(1:m-1,1:n-1),[],1);
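
To sanity-check this against the original loop implementation, the outputs can be compared directly (a quick sketch; getFlowParamsLoop and getFlowParamsVec are hypothetical wrappers around the original loop code and the vectorized snippet above):

% Stand-in frames; both functions are assumed to return column vectors
img1 = rand(320, 568);
img2 = rand(320, 568);
[Ix1, Iy1, It1] = getFlowParamsLoop(img1, img2);
[Ix2, Iy2, It2] = getFlowParamsVec(img1, img2);

% Differences should be down at floating-point round-off level
fprintf('max diffs: %g %g %g\n', ...
    max(abs(Ix1 - Ix2)), max(abs(Iy1 - Iy2)), max(abs(It1 - It2)));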

Advantages with such a vectorized implementation:

  • Memory efficiency: there is no longer a concatenation along the third dimension, which could get memory-hungry. Performance should also benefit, since we no longer have to index into such a heavy 3D array.

  • diff and conv2 are built-in, highly optimized functions, so the per-pixel work moves out of interpreted loops and into fast vectorized calls.


An alternative implementation:

function [Ix, Iy, It] = getFlowParams(imNew, imPrev)

% 1-D smoothing kernel (weights sum to 1)
gg = [0.2163, 0.5674, 0.2163];

% Spatial gradients: central differences of the summed frames,
% smoothed along the orthogonal direction
f = imNew + imPrev;
Ix = f(:,[2:end end]) - f(:,[1 1:(end-1)]);
Ix = conv2(Ix, gg', 'same');

Iy = f([2:end end],:) - f([1 1:(end-1)],:);
Iy = conv2(Iy, gg, 'same');

% Temporal derivative: smoothed frame difference
It = 2*conv2(gg, gg, imNew - imPrev, 'same');
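
Note that, unlike the loop version, this returns Ix, Iy and It as full matrices rather than column vectors; a reshape such as Ix = Ix(:); would match the original function's output shape if needed.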

This is the kind of gradient computation used in both H & S and Lucas-Kanade implementations; see grad3D.m, and also grad3Drec.m.
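
For context, these derivative images are what feed the Horn & Schunck iterations; a minimal sketch of the standard update (alpha and the iteration count are arbitrary here, and Ix, Iy, It are assumed to be full matrices as returned by the function above):

alpha = 1;          % regularisation weight (arbitrary choice)
nIter = 100;        % number of iterations (arbitrary choice)
avgK  = [1 2 1; 2 0 2; 1 2 1] / 12;   % neighbourhood averaging kernel
u = zeros(size(Ix));
v = zeros(size(Ix));
for k = 1:nIter
    uAvg = conv2(u, avgK, 'same');
    vAvg = conv2(v, avgK, 'same');
    common = (Ix.*uAvg + Iy.*vAvg + It) ./ (alpha^2 + Ix.^2 + Iy.^2);
    u = uAvg - Ix .* common;
    v = vAvg - Iy .* common;
end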


Source: https://habr.com/ru/post/1621000/

