On Day 33 of Handmade Hero, at this timestamp: https://youtu.be/iHSAOSYOt9E?list=PLnuhp3Xd9PYTt6svyQPyRO_AAuMWGxPzU&t=832
Casey Muratori explains what would happen if we tried to store where stuff is in the world in 32-bit values. I did not understand this explanation. He mentions that he said something about this before, but I don't know in which video (I admit that I skip the Q&A each day). I'm hoping someone would be kind enough to explain it here:
Why do we lose the first 8 bits of the 32-bit value? He says something about there being an exponent and a mantissa, but I don't understand why that would remove the 8 bits (and assuming the mantissa part is what's unusable, why is it unusable?).
He also says we need another 8 bits to store color, and that it will be required to tell how far we are "through that pixel". I don't get this at all. What does distance have to do with color, and won't we need 3 channels for color, each of which is 8 bits? Also, why are we talking about color at all when talking about location? (I'm guessing it's something about anti-aliasing, which I know nothing about.) I also don't know what "the full color range of blending" or "alpha blending" means.
I realize I may be asking something that warrants a much bigger explanation than is possible in a forum post. So if you can point me to episodes I should watch (I have skipped all the Q&As), or some other resource I can read, I'll happily go through them.