iOS CGPath Performance

UPDATE

I worked around the limitations of Core Graphics by drawing everything with OpenGL. There are still a few glitches, but so far it works much, much faster.

Some interesting points:

  • GLKView : This view is iOS-specific, and it takes care of setting up the OpenGL context and the render loop for you. If you are not on iOS, I'm afraid you're on your own.
  • Shader precision : Shader variables in the current version of OpenGL ES (2.0) can have as little as 16-bit precision. That was too little for my purposes, so I emulated 32-bit arithmetic with pairs of 16-bit variables.
  • GL_LINES : OpenGL ES can draw simple lines. Not very well (no joints, no caps; see the violet-gray line at the top of the screenshot below), so to get anything better you have to write a custom shader, convert each line into a triangle strip, and pray that it works! (Presumably this is what browsers do when they say Canvas2D is GPU-accelerated.)

[Screenshot: example rendering]

  • Draw as little as possible . This may sound obvious, but you can often avoid rendering objects that are, for example, outside the viewport.
  • OpenGL ES has no support for filled polygons , so you have to tessellate them yourself. Consider using iPhone-GLU : it's a port of the MESA code, and it works pretty well, although it's a bit awkward to use (no standard Objective-C interface).

Original question

I am trying to draw a lot of CGPaths (usually more than 1000) in the drawRect method of my scroll view, which redraws whenever the user moves a finger. I have the same application in JavaScript for the browser, and I'm trying to port it to a native iOS application.

The iOS benchmark code (with 100 line operations; path is a pre-built CGMutablePathRef ):

```objc
- (void)drawRect:(CGRect)rect {
    // Start the timer
    BSInitClass(@"Renderer");
    BSStartTimedOp(@"Rendering");

    // Get the context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);
    CGContextSetFillColorWithColor(context, [[UIColor redColor] CGColor]);
    CGContextSetStrokeColorWithColor(context, [[UIColor blueColor] CGColor]);
    CGContextTranslateCTM(context, 800, 800);

    // Draw the points
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // Display the elapsed time
    BSEndTimedOp(@"Rendering");
}
```

For reference, the JavaScript code (with 10,000 line operations):

```javascript
window.onload = function() {
    canvas = document.getElementById("test");
    ctx = canvas.getContext("2d");

    // Prepare the points before drawing
    var data = [];
    for (var i = 0; i < 100; i++)
        data.push({x: Math.random() * canvas.width,
                   y: Math.random() * canvas.height});

    // Draw those points, and write the elapsed time
    var __start = new Date().getTime();
    for (var i = 0; i < 100; i++) {
        for (var j = 0; j < data.length; j++) {
            var d = data[j];
            if (j == 0) ctx.moveTo(d.x, d.y);
            else        ctx.lineTo(d.x, d.y);
        }
    }
    ctx.stroke();
    document.write("Finished in " + (new Date().getTime() - __start) + "ms");
};
```

Now, I'm much better at optimizing JavaScript than iOS code, but after some profiling, it seems that CGPath's overhead is absolutely, incredibly bad compared to JavaScript. Both snippets run at about the same speed on a real iOS device, yet the JavaScript code performs 100x the number of line operations of the Quartz2D code!

EDIT: here is the top of the Time Profiler output in Instruments:

```
Running Time      Self     Symbol Name
6487.0ms  77.8%   6487.0   aa_render
 449.0ms   5.3%    449.0   aa_intersection_event
 112.0ms   1.3%    112.0   CGSColorMaskCopyARGB8888
  73.0ms   0.8%     73.0   objc::DenseMap<objc_object*, unsigned long, true, objc::DenseMapInfo<objc_object*>, objc::DenseMapInfo<unsigned long> >::LookupBucketFor(objc_object* const&, std::pair<objc_object*, unsigned long>*&) const
  69.0ms   0.8%     69.0   CGSFillDRAM8by1
  66.0ms   0.7%     66.0   ml_set_interrupts_enabled
  46.0ms   0.5%     46.0   objc_msgSend
  42.0ms   0.5%     42.0   floor
  29.0ms   0.3%     29.0   aa_ael_insert
```

I would have thought this should be much faster on iOS, simply because the code is native... So, do you know:

  • ... what I am doing wrong here?
  • ... whether there is another, better solution for drawing many lines in real time?

Thanks a lot!

2 answers

As you described in your update, using OpenGL is the right solution. In theory you can emulate any kind of 2D drawing with OpenGL, but you have to implement all the shape algorithms yourself. For example, you need to extrude the corners of lines yourself: OpenGL has no real concept of a line. Its line drawing is a utility feature used almost exclusively for debugging. You should think of everything as a set of triangles.

I believe 16-bit floats are enough for most drawing. If you use coordinates with large magnitudes, consider dividing the space into several sectors to keep the numbers small; float precision degrades as values become very large or very small.

Update

I think you will soon run into trouble if you try to display UIKit content on top of the OpenGL view. Unfortunately, I haven't found a solution for that yet.


You killed CGPath's performance by using CGContextAddPath.

Apple explicitly says this will run slowly; if you want it to be fast, you need to attach your CGPath objects to CAShapeLayer instances.

By doing dynamic drawing in drawRect, you block all of Apple's performance optimizations. Switch to CALayer, and especially CAShapeLayer, and you should see performance improve enormously.

(NB: there are other performance gotchas in CG rendering that could affect this use case, e.g. obscure default settings in CG / Quartz / CA, but... you need to get rid of the CGContextAddPath bottleneck first.)


Source: https://habr.com/ru/post/920366/

