I am working on an iOS application that renders data as a line graph. The graph is drawn as a CGPath in a full-screen custom UIView and contains no more than 320 data points. The data is updated frequently, and the graph has to be redrawn accordingly - a refresh rate of 10/sec would be desirable.
So far, so simple. It turns out, however, that my approach is very CPU-intensive: updating a graph with 320 segments 10 times per second results in 45% CPU utilization for the process on an iPhone 4S.
Maybe I am underestimating the graphics work going on under the hood, but to me that seems like a lot of CPU load for this task.
Below is my drawRect: implementation, which is invoked every time a new dataset is ready. N holds the number of points, and points is a CGPoint* array with the coordinates to draw.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
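The listing above is cut off in the source. A minimal drawRect: that matches the description (stroking N points from a CGPoint* array; this is my reconstruction, not the original code) might look like:

    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();
        // Build one connected polyline from the data points and stroke it.
        CGContextAddLines(context, points, N);
        CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
        CGContextSetLineWidth(context, 1.0);
        CGContextStrokePath(context);
    }

Triggering the 10/sec update would then just be a matter of calling [self setNeedsDisplay] whenever a new dataset arrives.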
I tried rendering the path into a standalone CGContext first and then adding it to the current layer, as suggested here, but without any measurable improvement. I also tried drawing via a CALayer directly, but that made no difference either.
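For reference, the "standalone context" attempt boiled down to stroking into an offscreen CGLayer and then compositing that into the view's context (names and structure are my reconstruction, not the original code):

    CGContextRef context = UIGraphicsGetCurrentContext();
    // Create an offscreen layer compatible with the destination context.
    CGLayerRef layer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    CGContextRef layerCtx = CGLayerGetContext(layer);
    CGContextAddLines(layerCtx, points, N);
    CGContextStrokePath(layerCtx);
    // Composite the offscreen result into the view.
    CGContextDrawLayerAtPoint(context, CGPointZero, layer);
    CGLayerRelease(layer);

Since the path changes on every update, the offscreen layer cannot be cached across frames, which is presumably why this brings no speedup.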
Any suggestions for improving the performance of this task? Or is rendering simply this CPU-intensive and my expectations are off? Would OpenGL make sense / make a difference here?
Thanks / Andi
Update: I also tried using UIBezierPath instead of CGPath. This post gives a nice explanation of why that did not help. Fine-tuning via CGContextSetMiterLimit et al. did not bring much relief either.
Update #2: I ended up switching to OpenGL. It was a steep and frustrating learning curve, but the performance gains are simply unbelievable. However, Core Graphics' antialiasing does a nicer job than what can be achieved with 4x multisampling in OpenGL.
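For completeness, the core of such an OpenGL port is small: upload the points as a vertex array and draw a line strip each frame. A sketch assuming an OpenGL ES 1.1 context already bound to the view (this is illustrative, not the original code):

    // Convert the CGPoint data into a flat float array (x0, y0, x1, y1, ...).
    GLfloat vertices[320 * 2];
    for (int i = 0; i < N; i++) {
        vertices[2 * i]     = points[i].x;
        vertices[2 * i + 1] = points[i].y;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    // One draw call per frame renders all N-1 segments on the GPU.
    glDrawArrays(GL_LINE_STRIP, 0, N);

The win comes from the GPU doing the rasterization; the CPU only copies 320 points per frame, which explains the dramatic difference against per-segment stroking in Core Graphics.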
ios objective-c core-graphics cgpath
Andi Wagner Jan 03 '12 at 16:57