Mysterious performance degradation with OpenGL + infinite loop

I am working on an emulator as a side project, but I am having performance issues and cannot figure out where they come from.

The application mainly consists of a GLKView for display and a separate thread running an infinite loop that emulates the processor. Here's a stripped-down sample, without the actual emulation code, that still exhibits the problem:

    @implementation ViewController

    - (void)viewDidLoad {
        [super viewDidLoad];

        GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds];
        glView.delegate = self;
        glView.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:glView.context];
        [self.view addSubview:glView];
        glView.enableSetNeedsDisplay = NO;

        CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:glView
                                                                 selector:@selector(display)];
        [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

        dispatch_async(dispatch_queue_create("yeah", DISPATCH_QUEUE_SERIAL), ^{
            CFTimeInterval lastTime = 0;
            CFTimeInterval time = 0;
            int instructions = 0;
            while (1) {
                // here be cpu emulation

                if (lastTime == 0) {
                    lastTime = CACurrentMediaTime();
                } else {
                    CFTimeInterval newTime = CACurrentMediaTime();
                    time += newTime - lastTime;
                    lastTime = newTime;
                }
                if (++instructions == 1000) {
                    printf("%f\n", 1 / (time * 1000)); // frequency in MHz
                    time = 0;
                    instructions = 0;
                }
            }
        });
    }

    - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
        glClearColor(0.0, 0.0, 0.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        // Here be graphics
    }

    @end

As you can see, the endless loop basically just counts its iterations and prints the resulting frequency in MHz.

So, the problem is that when the application starts, the loop runs at about 9-15 MHz (on an iPhone 6), and the GPU report in the Xcode Debug Navigator shows a CPU frame time of 0.2 ms. Then, a few seconds after startup, the loop drops to 1-5 MHz and the CPU frame time increases to 0.6 ms.

If I turn off the GLKView updates, the loop never slows down.

I also tried various threading APIs (GCD, NSThread, pthreads), but that doesn't seem to make any difference.

My question is: am I doing something wrong? Is this simply a case of the GLKView not being fully initialized for the first couple of seconds, therefore using less CPU than usual and letting my loop run faster? Are there other ways I could structure my code to get maximum performance out of the loop?

Update: I did some more tests and noticed that the problem also occurs when using a CAEAGLLayer instead of a GLKView, and that it only happens on the device, not in the simulator. I also tried an OS X application with an NSOpenGLView, and it does not happen there...

Update 2: I tried starting the thread after a delay rather than immediately, and if the delay is longer than the time it usually takes for the drop to occur, the thread is slow right from the start... I'm not quite sure what to make of that...

Metal update: I tried using Metal instead of OpenGL, starting from the default Metal template in Xcode, and the same thing happens there...

2 answers

The CPU frequency can be lowered by the operating system to consume less power / save battery. If your thread does not use much CPU, the OS decides it is time to lower the frequency. A desktop computer, on the other hand, runs many other threads and processes (and the thresholds are probably very different), which may be why this doesn't happen in the simulator or in a desktop application.

There are several possible reasons why your thread could be classified as not consuming much CPU time. One is that you call printf, which probably takes some kind of lock internally that makes your thread wait (CACurrentMediaTime might do the same). The other is probably related to the GLKView updates, although I'm not sure exactly how.


So, I still don't understand why this happens, but I managed to find a workaround: using a CALayer backed by a CGBitmapContext instead of OpenGL, inspired by https://github.com/lmmenge/MeSNEmu.

    @interface GraphicLayer : CALayer {
        CGContextRef _context;
    }
    @end

    @implementation GraphicLayer

    - (id)init {
        self = [super init];
        if (self) {
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            _context = CGBitmapContextCreate(NULL, 418, 263, 8, 418 * 4, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
            CFRelease(colorSpace);
        }
        return self;
    }

    - (void)display {
        CGImageRef CGImage = CGBitmapContextCreateImage(_context);
        self.contents = (__bridge id)(CGImage);
        CGImageRelease(CGImage);
    }

    @end

    @interface GraphicView : UIView
    @end

    @implementation GraphicView

    + (Class)layerClass {
        return [GraphicLayer class];
    }

    - (void)drawRect:(CGRect)rect {
    }

    @end

With this approach the loop doesn't slow down (whether I run an infinite loop or perform a bunch of operations on each frame), but I'm not quite sure why...


Source: https://habr.com/ru/post/1210226/
