I am trying to write an application that plays video in full screen as smoothly as possible.
My OpenGL code synchronizes buffer swaps with the vertical retrace, and most of the time playback runs smoothly with low CPU usage.
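For reference, by "synchronizes with the vertical retrace" I mean the usual swap-interval setting. Below is a rough sketch of the CGL variant (the helper name is just illustrative, and context creation is omitted):

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

/* Make buffer swaps wait for the vertical retrace (swap interval of 1). */
static void enable_vsync(CGLContextObj ctx)
{
    GLint swapInterval = 1;
    CGLSetParameter(ctx, kCGLCPSwapInterval, &swapInterval);
}
```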
However, sometimes a slight stutter occurs.
To simplify testing and rule out other sources of delay, I wrote a minimal render loop that just switches to full-screen mode, clears the frame, and swaps the buffers.
I timed each buffer swap (using mach_absolute_time()) and printed any frame time that falls outside the 15 to 18 millisecond range.
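Here is a minimal sketch of the kind of test loop I'm describing. I've written it against GLFW 3 purely to make it self-contained; the timing logic is the same regardless of the framework:

```c
#include <GLFW/glfw3.h>
#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

/* Convert mach_absolute_time() ticks to milliseconds using the timebase ratio. */
static double ticks_to_ms(uint64_t ticks)
{
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0)
        mach_timebase_info(&tb);
    return (double)ticks * tb.numer / tb.denom / 1.0e6;
}

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Full-screen window at the monitor's current video mode. */
    GLFWmonitor *monitor = glfwGetPrimaryMonitor();
    const GLFWvidmode *mode = glfwGetVideoMode(monitor);
    GLFWwindow *window = glfwCreateWindow(mode->width, mode->height,
                                          "timing test", monitor, NULL);
    if (!window)
        return 1;

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);                       /* sync swaps to the vertical retrace */

    uint64_t start = mach_absolute_time();
    uint64_t prev  = start;

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);          /* nothing else is drawn */
        glfwSwapBuffers(window);               /* blocks until the next retrace */
        glfwPollEvents();

        uint64_t now = mach_absolute_time();
        double frame_ms = ticks_to_ms(now - prev);

        /* Report only the frames that fall outside the expected 15-18 ms window. */
        if (frame_ms < 15.0 || frame_ms > 18.0)
            printf("at time:%f -> frame time:%06.3f ms\n",
                   ticks_to_ms(now - start) / 1000.0, frame_ms);

        prev = now;
    }

    glfwTerminate();
    return 0;
}
```

Since the swap blocks on vsync, each iteration should take roughly 16.7 ms on a 60 Hz display.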
The following is an example run (the report time is in seconds, the frame time in milliseconds):
at time:0.903594 -> frame time:06.943 ms
at time:1.941287 -> frame time:20.801 ms
at time:1.956124 -> frame time:14.725 ms
at time:1.969766 -> frame time:13.533 ms
at time:4.059608 -> frame time:23.808 ms
at time:4.068953 -> frame time:09.255 ms
at time:6.090000 -> frame time:55.086 ms
at time:6.090681 -> frame time:00.210 ms
at time:6.101372 -> frame time:10.659 ms
at time:9.684669 -> frame time:18.014 ms
at time:15.032867 -> frame time:18.463 ms
at time:15.047580 -> frame time:14.618 ms
at time:17.028749 -> frame time:65.096 ms
at time:17.028962 -> frame time:00.108 ms
at time:17.037022 -> frame time:08.034 ms
at time:17.049193 -> frame time:12.069 ms
at time:17.063416 -> frame time:14.130 ms
No other applications were running during the tests, apart from Xcode, which the test was launched from. The tests were performed on a MacBook Pro 5,1 running OS X 10.7.5.
Switching between the two graphics chips (this laptop has an NVIDIA 9600 and a 9400), and running on an external monitor or on the laptop screen, makes no difference.
To rule out differences in the underlying API, I tried the same code through several frameworks: SDL, GLFW, Cinder, and SFML. Finally, I also tried Apple's official full-screen OpenGL sample code. They all behave more or less the same, although GLFW seems slightly more stable.
Raising the drawing thread's priority to the maximum with the SCHED_FIFO policy seems to help a little, but not much.
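For completeness, this is roughly how I raise the priority (called from the drawing thread itself; error handling omitted, and the function name is just illustrative):

```c
#include <pthread.h>
#include <sched.h>

/* Request real-time FIFO scheduling at maximum priority for the calling thread. */
static int raise_drawing_thread_priority(void)
{
    struct sched_param param;
    param.sched_priority = sched_get_priority_max(SCHED_FIFO);
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}
```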
I am starting to think that getting a rock-solid 60 Hz frame rate out of OpenGL on OS X is either impossible or not properly documented.
Has anyone been able to get a solid 60 Hz refresh rate from a full-screen OpenGL application on OS X? If so, how?
EDIT: I noticed that running the tests from the Terminal improves the timings. Launching from Xcode produces more stutter, and those were the numbers I based my conclusions on. In any case, I still seem to get more stable behavior under Windows 7 on the same machine, but the current fluctuations are within acceptable limits.