Creating events for gesture-controlled sites

I am very glad that I have the opportunity to work on a gesture-based website. I have several reasons for this: link

I visited many websites looking for information, and Wikipedia and GitHub did not help much either. Little information exists because these technologies are in their infancy. I think I will have to use some JavaScript files for this project:

  • gesture.js (our custom JavaScript code)
  • show.js (works with the slideshow frame)

My questions are: how do gestures generate events, and how does my JavaScript interact with the webcam? Should I use particular APIs or algorithms?

I am not asking for code, just for the mechanism, or some links providing important information. I seriously believe that if the accuracy of this technology can be improved, it can work wonders in the near future.

2 answers

To enable gesture interactions in a web application, you can use navigator.getUserMedia() to receive video from the local webcam, periodically draw video frame data into a canvas element, and then analyze the changes between frames.
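
A minimal sketch of that pipeline, assuming a modern browser: it uses the promise-based navigator.mediaDevices.getUserMedia() (the plain navigator.getUserMedia() mentioned above is deprecated), and the grabFrame name is illustrative:

    // Capture webcam video and sample frames on an off-screen canvas.
    const video = document.createElement('video');
    video.muted = true; // avoid autoplay restrictions
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    let previousFrame = null;

    navigator.mediaDevices.getUserMedia({ video: true })
      .then((stream) => {
        video.srcObject = stream;
        return video.play();
      })
      .then(() => {
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        requestAnimationFrame(grabFrame);
      })
      .catch((err) => console.error('Webcam access denied:', err));

    function grabFrame() {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      if (previousFrame) {
        // compare frame.data against previousFrame.data here
      }
      previousFrame = frame;
      requestAnimationFrame(grabFrame);
    }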

There are several JavaScript gesture libraries and demos (including a beautiful slide controller). You can use a library like headtrackr.js for face/head tracking: see the example at simpl.info/headtrackr.
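
If you go the headtrackr.js route, the basic wiring looks roughly like this (based on the library's README; verify the event names against the version you load, and the video and canvas elements are assumed to already exist in the page):

    // Assumes <video id="inputVideo" autoplay> and a hidden
    // <canvas id="inputCanvas" width="320" height="240"> in the page.
    var videoInput = document.getElementById('inputVideo');
    var canvasInput = document.getElementById('inputCanvas');

    var htracker = new headtrackr.Tracker();
    htracker.init(videoInput, canvasInput);
    htracker.start();

    // headtrackr dispatches tracking events on the document.
    document.addEventListener('facetrackingEvent', function (event) {
      // event.x / event.y give the face position on the canvas;
      // map them to slide navigation, scrolling, and so on.
      console.log('face at', event.x, event.y);
    });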


I am playing with this a little at the moment, so from what I have understood, the easiest way is:

  • You request the use of the user's webcam for video.
  • When permission is given, you create a canvas into which to place the video.
  • You apply a filter (black and white) to the video.
  • You put some control points in the canvas frame (small areas where all the pixel colors within them are registered).
  • You attach a function that runs for each frame (for clarity I will demonstrate only left and right gestures); a sketch of this setup follows the list.
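
A sketch of that setup, assuming the video is already being drawn to a canvas as in the first answer; the control point regions are arbitrary illustrative values:

    // Small regions of the canvas whose pixels we will watch for movement.
    const controlPoints = [
      { name: 'left',  x: 40,  y: 80, w: 40, h: 40 },
      { name: 'right', x: 240, y: 80, w: 40, h: 40 },
    ];

    // The black-and-white filter: average the channels to grayscale in place.
    function toGrayscale(imageData) {
      const d = imageData.data;
      for (let i = 0; i < d.length; i += 4) {
        const v = (d[i] + d[i + 1] + d[i + 2]) / 3;
        d[i] = d[i + 1] = d[i + 2] = v;
      }
      return imageData;
    }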

In each frame:

  • If the frame is the first (F0), continue.
  • If not, subtract the current frame's pixels (Fn) from the previous frame's:
    • if there was no movement between Fn and F(n-1), all pixels will be black
    • if there was, you will see the difference Delta = Fn - F(n-1) as white pixels
  • Then you can check your control points for which areas are lit and store them (x(N) = control points lit in DeltaN); a sketch of this step follows the list.
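
A sketch of the differencing step, assuming both frames are grayscale ImageData of equal size (the 30 used as a noise threshold is arbitrary):

    // Subtract the previous frame from the current one; moved pixels come out white.
    function frameDelta(frame, previousFrame) {
      const out = new Uint8ClampedArray(frame.data.length);
      for (let i = 0; i < frame.data.length; i += 4) {
        const diff = Math.abs(frame.data[i] - previousFrame.data[i]);
        const v = diff > 30 ? 255 : 0;
        out[i] = out[i + 1] = out[i + 2] = v;
        out[i + 3] = 255;
      }
      return new ImageData(out, frame.width, frame.height);
    }

    // Count how many moved (white) pixels fall inside one control point region.
    function activity(delta, p) {
      let lit = 0;
      for (let y = p.y; y < p.y + p.h; y++) {
        for (let x = p.x; x < p.x + p.w; x++) {
          if (delta.data[(y * delta.width + x) * 4] === 255) lit++;
        }
      }
      return lit;
    }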

Repeat the same process until you have two or more Delta variables; then subtract the control points of DeltaN from the control points of Delta(N-1) and you have a vector (see the sketch after this list):

  • x(N) = control points lit in DeltaN
  • x(N-1) = control points lit in Delta(N-1)
  • vector = x(N) - x(N-1)
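
One simple way to realize this subtraction, using the centroid of the lit pixels rather than per-control-point bookkeeping (a substitution of mine, not the answer's exact method):

    // Mean position of the white (moved) pixels in a delta frame.
    function centroid(delta) {
      let sumX = 0, sumY = 0, n = 0;
      for (let y = 0; y < delta.height; y++) {
        for (let x = 0; x < delta.width; x++) {
          if (delta.data[(y * delta.width + x) * 4] === 255) {
            sumX += x; sumY += y; n++;
          }
        }
      }
      return n ? { x: sumX / n, y: sumY / n } : null;
    }

    const a = centroid(previousDelta); // Delta(N-1)
    const b = centroid(currentDelta);  // DeltaN
    if (a && b) {
      const vector = { x: b.x - a.x, y: b.y - a.y };
      // the sign of vector.x distinguishes left from right movement
    }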

Now you can check whether the vector is positive or negative, or whether its values exceed thresholds of your choice:

    if positive on x and value > 5

then trigger the event and listen for it:

    $(document).trigger('MyPlugin/MoveLeft', values);
    $(document).on('MyPlugin/MoveLeft', doSomething);

You can significantly increase accuracy by caching the vectors, or by accumulating them and only triggering an event when the accumulated values reach a reasonable magnitude.
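
A minimal sketch of that accumulation idea, reusing the jQuery events from above (the 25 threshold is arbitrary, and left/right may be mirrored depending on the camera):

    let accumulated = { x: 0, y: 0 };

    function onVector(v) {
      accumulated.x += v.x;
      accumulated.y += v.y;
      if (Math.abs(accumulated.x) > 25) {
        const name = accumulated.x > 0 ? 'MyPlugin/MoveRight' : 'MyPlugin/MoveLeft';
        $(document).trigger(name, [accumulated]);
        accumulated = { x: 0, y: 0 }; // reset after firing
      }
    }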

You can also inspect the shape produced by the first subtraction, try to recognize the "hand" or "field", and then listen for changes in that shape's coordinates. But keep in mind that gestures happen in 3D while the analysis is in 2D, so the same shape can change as it moves.

Here is a more accurate explanation. Hope my explanation helped.

