On iOS, what's the fastest way to cache a screen-sized image and display it?

Instead of having drawRect redraw thousands of dots every time, I think there are several ways to "cache the screen image" so that any additional drawing gets added to that cached image, and drawRect only has to show the image:

  • Use a bitmap context (CGBitmapContextCreate), draw into the bitmap, and in drawRect draw this bitmap into the view.

  • Use a CGLayer and draw it in drawRect; this might be faster than method (1), since the image could be cached on the graphics card (and would it then not count toward the RAM usage that triggers a "memory warning" on iOS?).

  • Draw into a CGImage and use the view's layer: view.layer.contents = (id)cgImage; (a rough sketch of (1) and (3) follows this list).
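For concreteness, methods (1) and (3) might look roughly like this (a sketch only; `cachedContext` is an assumed ivar holding the bitmap context, and coordinate flipping between UIKit and Core Graphics is ignored here):

    // Method (1): draw increments into a cached bitmap context,
    // then blit the cached bitmap in drawRect:.
    - (void)addDotAtPoint:(CGPoint)p {
        if (cachedContext == NULL) {
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            cachedContext = CGBitmapContextCreate(NULL,
                self.bounds.size.width, self.bounds.size.height,
                8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(colorSpace);
        }
        CGContextFillEllipseInRect(cachedContext, CGRectMake(p.x, p.y, 2.0, 2.0));
        [self setNeedsDisplay];
    }

    - (void)drawRect:(CGRect)rect {
        CGImageRef image = CGBitmapContextCreateImage(cachedContext);
        // Method (1): draw the cached bitmap into the view's context ...
        CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
        // ... or, method (3): skip drawRect entirely and set it as layer contents:
        // self.layer.contents = (id)image;   // (__bridge id) under ARC
        CGImageRelease(image);
    }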

So there are apparently three methods, and I think the CALayer in method (3) can only achieve this via a CGImage; a CALayer on its own cannot cache a screen image the way a CGLayer in (2) can.

Is method (2) the fastest of the three, and are there other methods that can do this? I actually plan to animate several screen-sized images (cycling through 5 or 6 of them), using CADisplayLink to aim for the maximum frame rate of 60 frames per second. Will any of methods (1), (2), or (3) keep the image in video memory rather than RAM, and therefore be less likely to trigger a memory warning from iOS?

+6

2 answers

Based on the last few questions you've asked, it seems you are fundamentally confusing CGLayers and CALayers. They are different concepts and are not related to each other. A CGLayer is a Core Graphics construct that helps when rendering the same content repeatedly into a Core Graphics context's canvas, and it is tied to the single screen, bitmap, or PDF context it was created for. I've rarely had to work with CGLayers.

A CALayer is a Core Animation layer, and one backs every UIView on iOS (and every layer-backed NSView on the Mac). You deal with these all the time on iOS, because they are a fundamental part of the user interface architecture. Each UIView is effectively a lightweight wrapper around a CALayer, and each CALayer is in turn really a wrapper around a textured quad on the GPU.

When a UIView is displayed on screen, its content has to be rendered once initially (and again whenever a full redraw is triggered). Core Graphics takes your lines, arcs, and other vector drawing (sometimes raster images as well) and rasterizes them into a bitmap. That bitmap is then uploaded to and cached on the GPU via the view's CALayer.

For interface changes such as views moving, rotating, or scaling, the views or layers do not need to be redrawn, which is an expensive process. Instead, they are simply transformed on the GPU and composited in their new locations. This is what makes animation and scrolling so smooth throughout the iOS interface.
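For example (a sketch; `dotView` is an illustrative view whose content was drawn once), moving it like this never calls drawRect again:

    // Changing position/transform only updates how the GPU composites the
    // cached texture; the view's content is not re-rendered.
    [UIView animateWithDuration:1.0 animations:^{
        dotView.center = CGPointMake(200.0, 300.0);
        dotView.transform = CGAffineTransformMakeRotation(M_PI_4);
    }];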

Therefore, you will want to avoid redrawing anything with Core Graphics if you want the best performance. Split the moving parts of your scene into separate CALayers or UIViews. Think of how old-school animation used cels to hold the parts of a scene that moved, rather than having the animators redraw the entire scene for every change.

You can easily get hundreds of CALayers animating smoothly on modern iOS devices. However, if you want to draw thousands of dots for something like a particle system, you will be better off switching to OpenGL ES for that and rendering with GL_POINTS. It will take a lot more code to set up, but it may be the only way to get acceptable performance for the thousands of dots you are asking about.
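As a very rough sketch of the GL_POINTS approach (assuming a working OpenGL ES 2.0 context and a compiled shader program; `pointBuffer`, `positionAttribute`, `pointCoords`, and `numPoints` are illustrative names):

    // Upload the current point positions and draw them all in one call.
    glBindBuffer(GL_ARRAY_BUFFER, pointBuffer);
    glBufferData(GL_ARRAY_BUFFER, numPoints * 2 * sizeof(GLfloat),
                 pointCoords, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(positionAttribute);
    glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_POINTS, 0, numPoints);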

+9

One fast method that allows both caching graphics and modifying the contents of the cached graphics is to combine your methods (1) and (3).

(1) Create your own bitmap-backed graphics context, draw into it, and then modify it at any time (incrementally adding one dot or thousands of dots per pass, etc.) as needed. By itself this will be invisible, unfortunately, because there is no way to get a bitmap directly onto the display of an iOS device.

So, in addition to that:

(3) At some frame rate (60 Hz, 30 Hz, etc.), if the bitmap is dirty (has been modified), convert the bitmap context into a CGImage and assign that image to the CALayer's contents. This converts and copies all that bitmap memory into the GPU's texture cache (this is the slow part). Then use Core Animation to do whatever you want with the layer (hide it, composite it, fly it around the window, etc.) to display the texture made from your bitmap. Behind the scenes, Core Animation will eventually have the GPU render a quad using that texture onto some composited window tiles, which are eventually sent to the device's display (this description probably skips a whole bunch of stages in the graphics and driver pipelines). Rinse and repeat as needed on the main UI run loop. My blog post about this method is here.
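A sketch of the CADisplayLink-driven step (3), under the same assumptions (`cachedContext` is the bitmap context from step (1), `myLayer` is the target CALayer, and `bitmapDirty` is an illustrative flag set by the drawing code):

    // Fired by CADisplayLink at the display refresh rate.
    - (void)displayTick:(CADisplayLink *)link {
        if (!bitmapDirty) return;         // nothing changed, skip the upload
        CGImageRef image = CGBitmapContextCreateImage(cachedContext);
        myLayer.contents = (id)image;     // copies the bitmap into a GPU texture
        CGImageRelease(image);
        bitmapDirty = NO;
    }

    // Somewhere during setup:
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(displayTick:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];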

There is no way to partially modify the contents of a texture once the GPU is using it. You have to either replace it with a complete new texture upload or composite another layer on top of the textured layer. So you end up keeping twice the memory in use: one copy in the CPU's address space and one in the GPU's texture cache.

+5

Source: https://habr.com/ru/post/916796/

