Casey, in various posts on this forum, says that if you're not using a fixed time step, temporal effects like motion blur can make it look as if the entity teleported.
"Basically my problem with it is I don't see how to scale it to handle temporal effects like motion blur. In order to have a frame that is on the screen for n seconds have n seconds of motion blur in it, you have to know n _before_ you render it. But with the second method you describe, you have no idea what n is going to be because n is variable. Using the previous frame's n to compute the amount of blur but then showing the frame for however long it takes to render the next frame simply does not produce a series of frames whose on-screen time lines up with the temporal effects rendered into them."
So, what I get from this is that with a non-fixed time step we don't know how long an individual frame will stay on screen (due to OS scheduling, GPU pipelining, etc.), but I don't really see why that matters. Don't we only need the start and end positions of the entity, so we can just interpolate between them and blur across that span? Why does it matter how long the frame is actually displayed? Casey seems to be saying we should blur *more* the longer a frame stays on screen, but I honestly don't get why that's the case, or why motion blur in particular needs this property when other graphical effects don't.
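To make my confusion concrete, here's a little 1-D sketch of what I *think* the mismatch would be (all numbers are made up): each frame I render a blur streak covering the motion I predict using the previous frame's duration, but the frame actually sits on screen for some other duration, so the streak I drew doesn't line up with the motion the viewer actually saw.

```python
# Hypothetical 1-D example: an entity moving at constant velocity.
# Each frame we render a blur streak sized by our *guess* for the frame
# time (the previous frame's dt), but the frame actually stays on screen
# for dt_actual seconds. The difference is the visual discontinuity.

velocity = 100.0  # units per second

# Made-up variable frame times (seconds) - e.g. OS scheduling, GPU load.
frame_times = [0.016, 0.033, 0.016, 0.050, 0.016]

pos = 0.0
prev_dt = frame_times[0]  # guess for the first frame
gaps = []
for dt_actual in frame_times:
    blur_end_rendered = pos + velocity * prev_dt  # streak we drew
    blur_end_actual = pos + velocity * dt_actual  # motion actually shown
    gaps.append(blur_end_actual - blur_end_rendered)  # mismatch
    pos = blur_end_actual
    prev_dt = dt_actual

# Every nonzero gap is a chunk of motion with no blur covering it
# (or blur where no motion happened) - the "teleport" look?
print(gaps)
```

If I've understood right, with a fixed time step every `gaps` entry would be zero, and with a variable one the streaks under- or over-shoot whenever the frame time changes. Is that the effect Casey means?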
I think Casey also likened it to a kind of teleporting effect on Day 12, but I don't see how that occurs. TY :)