I've been playing with this a little myself, so from what I understand, this is the easiest way:
- Request access to the user's webcam for video.
- Once permission is granted, create a canvas and draw the video frames into it.
- Apply a grayscale (black-and-white) filter to the video.
- Place a few control points on the canvas frame (small areas in which all the pixel colors are sampled).
- Attach a function that runs on every frame (for clarity, I'll only demonstrate left/right gestures); a sketch of this setup follows below.
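Here is a minimal sketch of that setup in plain JavaScript. The names (`video`, `canvas`, `processFrame`) are my own, and a real version would need error handling for denied permissions:

```js
const video = document.createElement('video');
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d', { willReadFrequently: true });

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
  requestAnimationFrame(processFrame);          // start the per-frame loop
});

function processFrame() {
  if (video.videoWidth) {                       // wait until the stream has data
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.filter = 'grayscale(100%)';             // the black-and-white filter
    ctx.drawImage(video, 0, 0);
    // ...per-frame differencing goes here (see the next sketch)...
  }
  requestAnimationFrame(processFrame);
}
```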
In each frame:
- If this is the first frame (F0), just continue.
- Otherwise, subtract the previous frame's pixels (F(n-1)) from the current frame's (Fn).
- If there was no movement between Fn and F(n-1), all pixels will be black.
- If there was, the difference Delta = Fn - F(n-1) shows up as white pixels.
- Then check your control points to see which areas are highlighted, and store the result: x = DeltaN (see the sketch after this list).
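A rough sketch of that differencing step, reusing `ctx` and `canvas` from the setup above; the control-point positions and the noise threshold of 32 are placeholder values you would tune:

```js
let previousFrame = null;                        // F(n-1)
const controlPoints = [                          // small 8x8 sample areas
  { x: 40, y: 120 }, { x: 120, y: 120 },
  { x: 200, y: 120 }, { x: 280, y: 120 },
];

function sampleDelta() {
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  if (!previousFrame) {                          // first frame F0: nothing to diff
    previousFrame = frame;
    return null;
  }
  // |Fn - F(n-1)| per control point: still pixels differ by ~0 (black),
  // moving pixels differ by a lot (white)
  const lit = [];
  for (const p of controlPoints) {
    let diff = 0;
    for (let dy = 0; dy < 8; dy++) {
      for (let dx = 0; dx < 8; dx++) {
        const i = ((p.y + dy) * frame.width + (p.x + dx)) * 4;
        diff += Math.abs(frame.data[i] - previousFrame.data[i]); // grayscale: R=G=B
      }
    }
    if (diff / 64 > 32) lit.push(p.x);           // this area is "highlighted"
  }
  previousFrame = frame;
  if (!lit.length) return null;                  // no movement this frame
  return lit.reduce((a, b) => a + b) / lit.length; // x = DeltaN
}
```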
Repeat the same process until you have two or more Delta values; then subtract the Delta(N-1) control points from the DeltaN control points and you have a vector:
- stored: x = DeltaN
- stored: x = Delta(N-1)
- with two of them: x = DeltaN - Delta(N-1)
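In code that could look like this (again just a sketch; `deltas` and `motionVector` are my own names):

```js
const deltas = [];                               // history of x = DeltaN values

function motionVector(deltaN) {
  if (deltaN === null) return null;              // no movement was detected
  deltas.push(deltaN);
  if (deltas.length < 2) return null;            // need both DeltaN and Delta(N-1)
  const n = deltas.length - 1;
  return deltas[n] - deltas[n - 1];              // x = DeltaN - Delta(N-1)
}
```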
Now you can check whether the vector is positive or negative, or whether its values exceed a threshold of your choice (for example: positive on x and value > 5), then trigger an event and listen for it:
```js
// register the listener first, then fire the event once a gesture is detected
$(document).on('MyPlugin/MoveLeft', doSomething);
$(document).trigger('MyPlugin/MoveLeft', values);
```
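Putting the threshold check and the events together might look like this. The `MoveRight` event name and the 5 px threshold are my assumptions, and the left/right mapping flips if you mirror the preview:

```js
function reportGesture(vector) {
  if (vector === null) return;
  if (vector > 5)  $(document).trigger('MyPlugin/MoveRight', [vector]);
  if (vector < -5) $(document).trigger('MyPlugin/MoveLeft', [vector]);
}

$(document).on('MyPlugin/MoveLeft', (e, v) => console.log('moved left by', v));
```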
You can significantly improve accuracy by caching the vectors, or by summing them and only firing the event once the accumulated vector reaches a reasonable value.
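A simple way to do that accumulation (the total of 30 px is an arbitrary "reasonable value"):

```js
let accumulated = 0;

function accumulate(vector) {
  if (vector === null) return;
  accumulated += vector;                         // sum vectors across frames
  if (Math.abs(accumulated) > 30) {              // only fire on clear motion
    reportGesture(accumulated);
    accumulated = 0;
  }
}
```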
You can also inspect the shape produced by the first subtraction, try to detect the "hand" outline, and then track changes in that shape's coordinates. Keep in mind, though, that the gestures happen in 3D while the analysis is 2D, so the same shape can deform as it moves.
Hope my explanation helped.