Hi guys, I'm writing a platform layer and ran into an issue I was hoping someone could offer some advice on.

I set up timing code for my main loop and I'm getting inconsistent frame times:

```
elapsed time: 18.05 ms
elapsed time: 17.19 ms
elapsed time: 15.82 ms
elapsed time: 16.34 ms
elapsed time: 17.11 ms
elapsed time: 15.33 ms
elapsed time: 18.19 ms
elapsed time: 16.33 ms
```


I'm using mach_absolute_time(), since from what I could find that's the highest-resolution timer available.
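For reference, here's roughly how I'm converting the raw mach_absolute_time() ticks to milliseconds (simplified; MachTimeToMilliseconds is just my helper name):

```c
#include <mach/mach_time.h>
#include <stdint.h>

// Convert a mach_absolute_time() delta (in host ticks) to milliseconds.
static double MachTimeToMilliseconds(uint64_t ElapsedTicks)
{
    static mach_timebase_info_data_t Timebase;
    if (Timebase.denom == 0)
    {
        mach_timebase_info(&Timebase);  // ticks -> nanoseconds ratio
    }
    return ((double)ElapsedTicks * Timebase.numer / Timebase.denom) / 1.0e6;
}
```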

I've got a basic loop where I poll for input, draw my graphics, and then swap the buffers. I'm using OpenGL for drawing and have V-Sync turned on. I would expect the frame times to be much more consistent, since Casey seems to get very close to 16.67 ms each frame on Windows.
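The loop is shaped roughly like this (sketched with the C-level CGL calls for concreteness; PollInput and DrawFrame are placeholders for my actual code, and GLContext is my CGLContextObj from window setup):

```c
#include <OpenGL/OpenGL.h>
#include <stdio.h>

GLint SwapInterval = 1;
CGLSetParameter(GLContext, kCGLCPSwapInterval, &SwapInterval);  // V-Sync on

for (;;)
{
    uint64_t Start = mach_absolute_time();

    PollInput();                  // pump the event queue
    DrawFrame();                  // my OpenGL drawing
    CGLFlushDrawable(GLContext);  // swap buffers (blocks on V-Sync)

    uint64_t End = mach_absolute_time();
    printf("elapsed time: %.2f ms\n", MachTimeToMilliseconds(End - Start));
}
```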

I've seen others use CVDisplayLink and draw from its callback, and if I use the CVTimeStamp the callback is passed, it looks like I'm getting almost exactly 16.6 ms per frame; but if I instead time the callback with mach_absolute_time(), I get the same inconsistent frame timing as above. So I guess my question is: is this expected, or is there a way to get more consistent frame timing on macOS?
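For reference, here's roughly the CVDisplayLink version I tried (setup and error handling trimmed; MachTimeToMilliseconds is the helper from the sketch above):

```c
#include <CoreVideo/CoreVideo.h>

static CVReturn DisplayLinkCallback(CVDisplayLinkRef DisplayLink,
                                    const CVTimeStamp *Now,
                                    const CVTimeStamp *OutputTime,
                                    CVOptionFlags FlagsIn,
                                    CVOptionFlags *FlagsOut,
                                    void *Context)
{
    // Deltas taken from the timestamps the link hands me come out at a
    // rock-steady ~16.6 ms (hostTime is in the same units as
    // mach_absolute_time()).
    static uint64_t LastHostTime;
    if (LastHostTime)
    {
        printf("link delta: %.2f ms\n",
               MachTimeToMilliseconds(OutputTime->hostTime - LastHostTime));
    }
    LastHostTime = OutputTime->hostTime;

    // Timing the callback body with mach_absolute_time() instead gives me
    // the same jittery numbers as my own loop.
    return kCVReturnSuccess;
}

// Setup (sketch):
// CVDisplayLinkRef Link;
// CVDisplayLinkCreateWithActiveCGDisplays(&Link);
// CVDisplayLinkSetOutputCallback(Link, DisplayLinkCallback, 0);
// CVDisplayLinkStart(Link);
```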

I've tried timing individual parts of my code, and it seems the buffer-swap call is what makes my frame times inconsistent.
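Concretely, I bracket just the swap like this (same helper as above), and that's where all the variance shows up:

```c
uint64_t SwapStart = mach_absolute_time();
CGLFlushDrawable(GLContext);  // the swap is the inconsistent part
uint64_t SwapEnd = mach_absolute_time();
printf("swap: %.2f ms\n", MachTimeToMilliseconds(SwapEnd - SwapStart));
```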