I'm trying to implement a game loop like Casey's in SDL2, i.e. a fixed timestep whose length is derived from the monitor's refresh rate at startup. Currently I do something along the lines of:
```c
// window was created earlier with SDL_WINDOW_OPENGL
SDL_DisplayMode display_mode = {0};
int display_index = SDL_GetWindowDisplayIndex(window);
SDL_GetCurrentDisplayMode(display_index, &display_mode);
u32 refresh_rate = (display_mode.refresh_rate == 0) ? 60 : (u32)display_mode.refresh_rate;

// Try adaptive vsync first, fall back to regular vsync.
if (SDL_GL_SetSwapInterval(-1) < 0) {
    if (SDL_GL_SetSwapInterval(1) < 0) {
        // Vsync unavailable; would need a manual frame limiter here.
    }
}

// Must be a floating-point division, otherwise this truncates to 0.
r32 frame_dt = 1.0f / (r32)refresh_rate;
```
I have considered the following situations:
- The refresh rate is a fixed 60Hz. OpenGL has a comfortable ~16.7ms to render and present the buffer.
- The refresh rate is a fixed 144Hz or 240Hz. 6.9ms or even 4.2ms per frame seems too much of a strain for the average GPU. Is this right? My workaround would be a manual 60fps loop. Will that cause tearing?
- The refresh rate is variable or cannot be determined at all. Here too I would fall back to a manual 60fps loop. Will that cause tearing?
In situations 2 or 3, the solution I can think of involves using SDL_Delay() to sleep away whatever is left of the frame.
According to the SDL2 documentation, the granularity of SDL_Delay() should be assumed to be at least 10ms. However, it seems likely that the situation will arise where I only have to wait, say, 5ms, i.e. less than that granularity, and the delay will obviously overshoot. So it seems the way around this is to do CPU churning for the remainder.
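Roughly, this is what I have in mind for the end-of-frame wait (an untested sketch; the helper name wait_for_frame, the typedefs, and the ~12ms margin are just placeholders of mine):

```c
#include <SDL2/SDL.h>
#include <stdint.h>

typedef uint32_t u32;
typedef uint64_t u64;
typedef float    r32;

// Sleep away most of the remaining frame time, then spin for the final
// stretch so SDL_Delay()'s coarse granularity cannot make us overshoot.
// frame_start is the SDL_GetPerformanceCounter() value taken at the top
// of the frame; frame_dt is the target frame time in seconds.
static void wait_for_frame(u64 frame_start, r32 frame_dt)
{
    u64 freq = SDL_GetPerformanceFrequency();
    r32 elapsed = (r32)(SDL_GetPerformanceCounter() - frame_start) / (r32)freq;

    // Only sleep while more than ~12ms remain, so even a 10ms overshoot
    // from the scheduler still leaves us short of the target.
    while (frame_dt - elapsed > 0.012f) {
        SDL_Delay(1);
        elapsed = (r32)(SDL_GetPerformanceCounter() - frame_start) / (r32)freq;
    }

    // CPU churning for the last few milliseconds.
    while (elapsed < frame_dt) {
        elapsed = (r32)(SDL_GetPerformanceCounter() - frame_start) / (r32)freq;
    }
}
```

After this returns I would call SDL_GL_SwapWindow() and take a fresh frame_start for the next frame.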
This is obviously not efficient code, yet the only alternative I can think of is writing platform-specific code to lower the granularity of the OS scheduler, along the lines of the sketch below. Any suggestions and answers to my questions would be most appreciated.
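On Windows I believe that would mean timeBeginPeriod from winmm (again just a sketch, not something I have integrated; the name set_scheduler_granularity is made up):

```c
#ifdef _WIN32
#include <windows.h>              // declares timeBeginPeriod/timeEndPeriod
#pragma comment(lib, "winmm.lib")

// Ask the Windows scheduler for 1ms timer resolution so Sleep()/SDL_Delay()
// overshoot far less; should be paired with timeEndPeriod(1) at shutdown.
static void set_scheduler_granularity(void)
{
    timeBeginPeriod(1);
}
#endif
```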
Thank you!