I am on iOS, but my application does something very similar.
As I recall, it is based on some sample code from Apple (see, in particular, RippleModel.m). How it works is that it does not place the video on a single quad, but on a heavily tessellated grid, so you have a ton of triangles with a ton of texture coordinates. It generates the vertices of this grid programmatically, and, more importantly, it also generates the texture coordinates programmatically, and holds them in arrays.
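As a rough illustration of that setup (this is my own minimal sketch, not the actual RippleModel.m code; GRID_W, GRID_H, vertices, texCoords and buildGrid are names I made up):

```c
/* Build a (GRID_W+1) x (GRID_H+1) lattice covering the quad, plus a matching
   array of texture coordinates that will be mutated every frame later on. */
#define GRID_W 64   /* columns of quads -- pick whatever density you need */
#define GRID_H 48   /* rows of quads */

float vertices [(GRID_W + 1) * (GRID_H + 1) * 2];  /* x,y per vertex */
float texCoords[(GRID_W + 1) * (GRID_H + 1) * 2];  /* s,t per vertex */

void buildGrid(void)
{
    for (int j = 0; j <= GRID_H; j++) {
        for (int i = 0; i <= GRID_W; i++) {
            int idx = (j * (GRID_W + 1) + i) * 2;

            /* positions span -1..1 in clip space; these never change */
            vertices[idx + 0] = -1.0f + 2.0f * (float)i / GRID_W;
            vertices[idx + 1] = -1.0f + 2.0f * (float)j / GRID_H;

            /* texture coordinates start as the undistorted 0..1 mapping */
            texCoords[idx + 0] = (float)i / GRID_W;
            texCoords[idx + 1] = (float)j / GRID_H;
        }
    }
}
```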
For each frame, it iterates over all the vertices and updates the texture coordinates for each, "deforming" them according to the ripple, based on where the user touched and on how much the texture should shift around the neighboring vertices. So the geometry does not change at all, and they do not do any deformation in the shader; it is all done in the texture coordinates, and the shader then just performs a straight texture lookup at the coordinates it receives. A sketch of that per-frame pass follows below.
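Continuing the sketch above (again my own hedged version, not Apple's code: rippleOffsetS/rippleOffsetT are placeholders for whatever propagation model you use):

```c
#define GRID_W 64
#define GRID_H 48
extern float texCoords[];              /* the s,t array from the grid-setup sketch */

/* placeholders: signed texture-coordinate displacement at grid point (i, j) */
extern float rippleOffsetS(int i, int j);
extern float rippleOffsetT(int i, int j);

void updateTexCoords(void)
{
    for (int j = 0; j <= GRID_H; j++) {
        for (int i = 0; i <= GRID_W; i++) {
            int idx = (j * (GRID_W + 1) + i) * 2;

            /* rest-position coordinates plus the current ripple displacement;
               the mesh itself never moves, only where each vertex samples */
            texCoords[idx + 0] = (float)i / GRID_W + rippleOffsetS(i, j);
            texCoords[idx + 1] = (float)j / GRID_H + rippleOffsetT(i, j);
        }
    }
    /* re-upload texCoords (e.g. via glBufferSubData) and draw; the fragment
       shader just samples the video texture at the interpolated coordinates */
}
```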
It's hard to say whether this approach will work for your needs, but if your deformations only happen in 2D, and if you can work out how to express your warp as adjustments to the texture coordinates, it may help.