Thinking about frame update

I've just watched day 18, and what got me thinking was the way Casey reasons about updating each frame. From what I've observed, there are perhaps two schools of thought about frame updates.

First - Casey's:
You're at time T when the frame switches. During the frame, which spans from T to T + 1, you're calculating what happens in the game from time T to T + 1, so essentially you're trying to predict what the game state will be when the next frame switch happens. The problem I see with this way of thinking is that, as Casey has said many times, it doesn't handle variable frame rate at all: because you're PREDICTING what the state will be at time T + 1, you cannot possibly account for a variable frame rate.

Second - (this is how I've been thinking about updating frames, and it seems to me almost every tutorial and article does the same):
You're at time T and you're calculating what happened to the game state since time T - 1. The ugly thing about this is that you're displaying the state of the game from time T at time T + 1. I don't think that's a real problem, though, because it doesn't matter that you're offset by one frame from game time as long as you're consistent about it. The benefit of this approach is that it handles variable frame rate well, because you always know how much game time has advanced since the last frame switch.
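To make the contrast concrete, here's a toy sketch of my own (not code from the series): given the real timestamps at which frame switches happen, method 2 can recover each frame's dt exactly after the fact, while method 1 has to commit to a predicted dt before the frame is shown.

```python
def measured_dts(frame_times):
    """Method 2: the dt fed into each update is the measured gap since
    the previous frame switch -- known exactly, even when the frame
    rate varies."""
    return [b - a for a, b in zip(frame_times, frame_times[1:])]

def predicted_dts(frame_times, target_dt):
    """Method 1: each frame assumes it will be on screen for the
    predicted (target) duration, regardless of what actually happens."""
    return [target_dt] * (len(frame_times) - 1)
```

With frame switches at 0, 16, and 50 ms, method 2 reports dts of 16 and 34 ms, while method 1 would have used its fixed prediction (say 16 ms) for both, which is exactly the variable-frame-rate weakness described above.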

I'd love to know what you think about this, if this is even a valid discussion and if so, what school of thought do you prefer?

Edited by Jan Ivanecky on
I think the second way is totally valid and I wouldn't discourage people from using it if they feel more comfortable with it, but I still think it is "wrong" unless I have always been missing the secret clue to how to make it work properly :)

Basically my problem with it is I don't see how to scale it to handle temporal effects like motion blur. In order to have a frame that is on the screen for n seconds have n seconds of motion blur in it, you have to know n _before_ you render it. But with the second method you describe, you have no idea what n is going to be, because n is variable. Using the previous frame's n to compute the amount of blur, but then showing the frame for however long it takes to render the next frame, simply does not produce a series of frames whose on-screen time lines up with the temporal effects rendered into them. So what you will end up with is the amount of temporal effect in each frame being based on the _previous_ frame's timing, which is always wrong.

Stated another way, there is an element of real time here, which is that the frame _is actually on the screen for an amount of real time_. So as soon as you start rendering something that is supposed to work in real time, like temporal effects are, then the back-dating trick no longer works, because you are misaligned with an actual real-time effect.
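One way to put numbers on that misalignment (a toy model of mine, not anything from the series): if each frame's blur is computed from the previous frame's on-screen time, the per-frame error is exactly the frame-to-frame variation, so a stable frame rate hides the problem and any stutter shows up directly as wrong blur.

```python
def backdated_blur_error(on_screen_times):
    """Toy model of the back-dating problem: the blur baked into
    frame i covers the time the *previous* frame was on screen, while
    frame i is actually displayed for on_screen_times[i]. The mismatch
    per frame is the difference between the two."""
    return [abs(cur - prev)
            for prev, cur in zip(on_screen_times, on_screen_times[1:])]
```

For frames on screen for 16, 16, 33, and 16 ms, the errors are 0, 17, and 17 ms: the steady stretch is fine, but every frame adjacent to the hitch carries the wrong amount of blur.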

Does that make sense?

I know it is tricky to think about, I get confused myself sometimes :) But unless I have always been missing something, it seems to me that if you really want something that converges to the correct image at the limit, you need to either have a fixed frame rate or a way to accurately predict how long the _next_ frame will be on the screen.

- Casey

Edited by Casey Muratori on
Thanks for the answer, Casey. That makes perfect sense to me :)
When I think about it, are both approaches equal when considering enforced fixed framerate? I'd say in that case they're identical code-wise, aren't they?
As far as I can tell, both methods only work well with stable frame rates. Method 1 implicitly requires a stable frame rate; method 2 can estimate the current frame time from past experience. In the case of an unstable frame rate, method 1 will have to scale down quality or render frequency to get back to the target frame rate (unless Casey still has some magic up his wristguards), while method 2's estimates will be slightly off, possibly diminishing quality (or, in the case of careless programming, affecting gameplay logic negatively).

I wonder if a hybrid approach would work. Predict the next frame like Casey, but make the target frame rate slowly creep up until the actual frame rate barely keeps up. If the actual frame rate drops noticeably, quickly drop the target frame rate as well, restarting the slow build up. If the actual frame rate stays below a minimum (say, 30 FPS), start dropping the quality.
Yes, to be clear, I don't think there's anything wrong with varying the frame rate using the fancy new monitors so long as you are still trying to predict the frame timing. So if you were going to do it this way, instead of just using the last frame time, I'd try to use some kind of rolling maximum or do something statistical to try to make sure you weren't going to keep being subtly wrong all the time - basically try to give yourself some padding.
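A rolling-maximum predictor with padding might look something like this (a sketch of the idea only; the window size and padding factor are made-up tuning values, not numbers from the thread):

```python
from collections import deque

def make_rolling_max_predictor(window=8, padding=1.1):
    """Predict the next frame's on-screen time from the worst frame in
    a recent window, scaled up by a padding factor, so the prediction
    rarely undershoots the real frame time."""
    history = deque(maxlen=window)   # only the last `window` frame times
    def predict(last_frame_time):
        history.append(last_frame_time)
        return max(history) * padding
    return predict
```

The rolling maximum means one slow frame raises the prediction immediately, and the prediction only relaxes once that frame ages out of the window, which is one way to get the "padding" behavior described above.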

- Casey
EdKeens
When I think about it, are both approaches equal when considering enforced fixed framerate? I'd say in that case they're identical code-wise, aren't they?

Really the code for the game is identical either way; it's just the conceptualization that changes. The place where the code differs is actually the platform layer's handling of the frame rate, i.e., whether it tries to pick frame rates that it can hit, or whether it just always uses the last frame time. So the only practical difference is in how you work on the "frame rate decision code", or whatever you want to call that...

- Casey
I was wondering why you don't use the dt-and-accumulator method from this blog post: http://gafferongames.com/game-physics/fix-your-timestep/

The idea is to completely decouple state update from rendering, and use velocity information from the game state in the renderer to interpolate positions forward in time to the correct (enough) position.

Variable frame rate, fixed state-update frequency: it works fine with higher and lower frame rates as long as the CPU can update the state at your specified dt rate. All you need to do is interpolate positions from 0 to 1 dt in the renderer based on the alpha value to avoid temporal aliasing. I'm assuming this would also be the method you'd use to handle variable-frame-rate monitor updates without judder.
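For reference, the core of the accumulator loop from that article looks roughly like this (a toy 1-D sketch of mine rather than the article's actual code; the unit-velocity "state" is just a stand-in for a real simulation):

```python
def fixed_timestep_frames(frame_durations, dt=1.0 / 60.0):
    """Accumulator loop in the style of 'Fix Your Timestep': the
    simulation advances only in fixed dt steps; rendering blends the
    previous and current states by alpha = accumulator / dt."""
    accumulator = 0.0
    prev_state = state = 0.0   # toy 1-D position
    velocity = 1.0             # moves 1 unit per second
    rendered = []
    for frame_time in frame_durations:
        accumulator += frame_time
        while accumulator >= dt:
            prev_state = state
            state += velocity * dt   # fixed-rate simulation step
            accumulator -= dt
        alpha = accumulator / dt     # fraction of a step left over
        rendered.append(prev_state * (1.0 - alpha) + state * alpha)
    return rendered
```

With dt = 1.0 and frames lasting 0.5 then 1.0 seconds, the first frame renders the interpolated position 0.0 (no step has happened yet) and the second renders 0.5, halfway between the two simulated states, which is the temporal smoothing the alpha value buys you.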

Edited by wasd on
I do not like decoupled methods because I find they lead to (sometimes much) slower code. Being able to update and render in one step can make a substantial difference in terms of cache utilization and memory bandwidth, especially when you are dealing with lots of objects.

- Casey
Cache performance is a good point. I wonder how much it will actually matter, though, especially in a 2D game. Almost every game out there supports variable frame rates, so they must be doing something that performs well enough to allow that flexibility. I suppose it does depend heavily on what minimum hardware you want to target and on the game design itself.

Edited by wasd on