iOS: playing a frame-by-frame grayscale animation in a custom colour

I have a 32-frame grayscale animation of a diamond exploding into pieces (i.e. 32 PNG images at 1024x1024).

My game uses 12 distinct colours, so I need to be able to play the animation in any colour I want.

This, I believe, rules out any stock Apple framework, and it also rules out a lot of the common code for frame-by-frame animation on iOS.

What are my potential solutions?

The best SO links I've found suggest that it may be possible to load the image for each frame into a GL texture on the fly (the example does this from the camera, so if I have everything already stored in memory, it should be even faster).

These are the options I can see (laziest first, most optimised last):

Option A: each frame (courtesy of CADisplayLink), load the relevant image from file into a texture, and display that texture.

I'm pretty sure this is a non-starter, so on to option B.

Option B: preload all the images into memory; then, as above, each frame we load the texture from memory rather than from file.

I think this may be the ideal solution; can someone give it a thumbs up or thumbs down?
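For concreteness, a minimal sketch of option B, assuming the 32 PNGs have already been decoded into 8-bit luminance buffers up front (the names `frameData`, `FRAME_W`, etc. are mine, not from the question): create the texture once, then re-upload pixel data on each display-link tick.

```c
#include <OpenGLES/ES2/gl.h>

#define FRAME_W    1024
#define FRAME_H    1024
#define NUM_FRAMES 32

// frameData[i] points to FRAME_W*FRAME_H bytes of 8-bit grayscale,
// all decoded from PNG in advance (option B's preload step).
extern const unsigned char *frameData[NUM_FRAMES];

static GLuint animTex;

void setupAnimationTexture(void) {
    glGenTextures(1, &animTex);
    glBindTexture(GL_TEXTURE_2D, animTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // tightly packed 8-bit rows
}

// Called from the CADisplayLink callback with the current frame index.
// (The first answer below improves on this with glTexSubImage2D.)
void uploadFrame(int i) {
    glBindTexture(GL_TEXTURE_2D, animTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, FRAME_W, FRAME_H, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, frameData[i]);
}
```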

Option C: preload all my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to the rectangle in the atlas for that frame.

While this is potentially the perfect balance between coding effort and performance, the major problem here is loss of resolution: on older iOS devices the maximum texture size is 1024x1024. If we cram 32 frames into that (which packs the same as cramming 64, i.e. an 8x8 grid), each frame only gets 128x128. If the resulting animation is anywhere near full-screen on an iPad, that isn't going to cut it.
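For reference, the per-frame texture rectangle in such an atlas is simple arithmetic. A sketch (names are mine), assuming the 8x8 grid of 128x128 cells in a 1024x1024 atlas described above:

```c
// Texture coordinates (0..1) of frame i in an 8x8 atlas grid.
typedef struct { float u0, v0, u1, v1; } UVRect;

UVRect atlasRect(int i) {
    const int cols = 8;
    const float cell = 1.0f / cols;   // 128/1024
    int col = i % cols;
    int row = i / cols;
    UVRect r = { col * cell,       row * cell,
                 (col + 1) * cell, (row + 1) * cell };
    return r;
}
```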

Option D: instead of loading into one GL texture, load into a bunch of textures. Additionally, we could squeeze four images into a single texture by using all four channels.

I fear nontrivial coding is required here; my RSI starts to tingle just thinking about this approach.
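The channel-packing half of option D is less code than it sounds: the usual trick is a fragment shader that selects one channel with a dot product. A sketch under my own naming assumptions (the question doesn't specify any shader):

```c
// Four consecutive grayscale frames are packed into the R, G, B and A
// channels of one RGBA texture. u_channel is (1,0,0,0), (0,1,0,0), etc.
// depending on frame % 4; u_tint is whichever of the 12 game colours
// is currently wanted.
static const char *fragSrc =
    "precision mediump float;                              \n"
    "varying vec2 v_uv;                                    \n"
    "uniform sampler2D u_tex;                              \n"
    "uniform vec4 u_channel;                               \n"
    "uniform vec4 u_tint;                                  \n"
    "void main() {                                         \n"
    "    float g = dot(texture2D(u_tex, v_uv), u_channel); \n"
    "    gl_FragColor = u_tint * g;                        \n"
    "}                                                     \n";
```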

I think I've answered my own question here, but if someone has actually done this or can see a way through, please answer!

2 answers

If you need better performance than (B), it looks like the key is glTexSubImage2D: http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml

Rather than pulling one frame at a time from memory, we could lay out 16 of the 512x512 8-bit grayscale frames contiguously in memory, send that across to GL as a single 1024x1024 32-bit RGBA texture, and then split it apart on the GL side using the above function.

This would mean performing one [RAM -> VRAM] transfer per 16 frames rather than per frame.
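Untested, but the upload half of that idea would look something like the sketch below (buffer names are mine). The part it leaves out is the decoding: working out which texels and channels of the big texture belong to which frame would have to happen in the fragment shader, and that is the fiddly bit.

```c
#include <OpenGLES/ES2/gl.h>

// packedFrames: 16 x (512*512) bytes of grayscale laid out contiguously,
// which is exactly the size of one 1024x1024 RGBA texture (4 MB).
void uploadFrameBatch(GLuint tex, const unsigned char *packedFrames) {
    glBindTexture(GL_TEXTURE_2D, tex);
    // One transfer covers 16 frames; the texture storage itself is
    // assumed to have been allocated earlier with glTexImage2D(..., NULL).
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024,
                    GL_RGBA, GL_UNSIGNED_BYTE, packedFrames);
}
```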

Of course, on more recent devices we could fit 64 frames rather than 16, since newer iOS devices can handle 2048x2048 textures.

I'm going to try technique (B) first and leave it at that if it works (I don't want to over-complicate the code), and look at this if necessary.

I still can't find any way to query how much texture memory the graphics chip has available. I've been told that when you try to allocate memory for a texture, GL simply returns 0 when it runs out. However, to implement this properly I would want to make sure I'm not sailing too close to the wind, resource-wise... I don't want my animation to use so much VRAM that the rest of my rendering fails...
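For what it's worth, the blunt standard way to detect the failure case described above is to check glGetError() after attempting the allocation; a sketch:

```c
#include <OpenGLES/ES2/gl.h>
#include <stdbool.h>

// Returns false if GL reports it could not allocate texture storage.
// Note: some drivers defer the actual allocation, so this isn't airtight.
bool tryAllocTexture(int w, int h) {
    while (glGetError() != GL_NO_ERROR) { }   // clear any stale errors
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    return glGetError() != GL_OUT_OF_MEMORY;
}
```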


You could get this done entirely with the CoreGraphics APIs; there is no reason to dive deep into OpenGL for a simple 2D problem like this. For the general approach to creating colour frames from a grayscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you use CGContextClipToMask() and then render with a specific colour, so that what remains is the diamond painted in that colour.

You can do this at runtime, or you can do it offline and create one video for each of the colours you want to support. It's easier on your CPU to perform the operation N times and save the results to files, though modern iOS hardware is much faster than it used to be. Beware of memory problems when writing video-processing code; see video-and-memory-usage-on-ios-devices for a primer describing the problem space.

You could combine all this with texture atlases and complex OpenGL machinery, but the video-based approach will be much easier to handle, and you won't have to worry as much about resource usage. See my library mentioned in the memory post if you're interested in saving implementation time.
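CGContextClipToMask() is the one call doing the real work in this answer. A minimal sketch of tinting a single grayscale frame (function and variable names are mine): clip the context to the frame, then flood-fill with the target colour.

```c
#include <CoreGraphics/CoreGraphics.h>

// Produce a tinted copy of one grayscale animation frame.
// grayFrame must be in a grayscale colour space with no alpha; CG then
// treats it as an alpha mask (white lets paint through, black clips it).
// Caller releases the returned image.
CGImageRef CreateTintedFrame(CGImageRef grayFrame, CGColorRef tint) {
    size_t w = CGImageGetWidth(grayFrame);
    size_t h = CGImageGetHeight(grayFrame);
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, rgb,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(rgb);

    CGRect rect = CGRectMake(0, 0, w, h);
    CGContextClipToMask(ctx, rect, grayFrame);  // diamond shape survives
    CGContextSetFillColorWithColor(ctx, tint);  // one of the 12 colours
    CGContextFillRect(ctx, rect);

    CGImageRef tinted = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return tinted;
}
```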


Source: https://habr.com/ru/post/1236037/

