How useful is it, generally, to allocate a huge, fixed-size, permanent memory pool?

In the HmH game (I watched about 100 episodes) the platform layer allocates a big chunk of permanent memory, passes it to the game layer, and then the game pushes stuff into it (assets, world data, etc.), treating it as unlimited contiguous memory.

I understand that this has the benefit of simplifying "hot reloading", and that if there isn't enough memory you only get one "failure point". Obviously, it's faster than having a bunch of fragmented memory. But are there any other benefits to making sub-arenas in a fixed-size memory block vs separately allocating big memory chunks for the different systems/data (sound mixer, assets, world data...)?
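To make the comparison concrete, here's a minimal sketch of what I mean by sub-arenas carved out of one block, assuming a simple bump allocator (the struct, function names, and sizes are just made up for illustration, not the actual HmH code):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <assert.h>

typedef struct {
    uint8_t *base;
    size_t   size;
    size_t   used;
} Arena;

// Bump-allocate `size` bytes from an arena; no per-allocation free.
static void *arena_push(Arena *arena, size_t size)
{
    assert(arena->used + size <= arena->size);
    void *result = arena->base + arena->used;
    arena->used += size;
    return result;
}

// Carve a child arena out of a parent arena's memory.
static Arena arena_sub(Arena *parent, size_t size)
{
    Arena sub = {0};
    sub.base = arena_push(parent, size);
    sub.size = size;
    return sub;
}

int main(void)
{
    // One big permanent block handed over by the platform layer
    // (sizes here are placeholders).
    Arena permanent = {0};
    permanent.size = 256 * 1024 * 1024;
    permanent.base = malloc(permanent.size);

    // Sub-arenas for the different systems, all living inside that block.
    Arena assets = arena_sub(&permanent, 128 * 1024 * 1024);
    Arena world  = arena_sub(&permanent,  64 * 1024 * 1024);
    Arena mixer  = arena_sub(&permanent,  16 * 1024 * 1024);

    (void)assets; (void)world; (void)mixer;
    free(permanent.base);
    return 0;
}
```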

I guess doing the latter might be slightly slower because you call an external allocator more times, but that's probably negligible. I'm making a sandbox kind of game, and the world arena could be very big or very small depending on the chosen world size, so in that situation I feel it makes more sense to allocate a big chunk of memory once the world size is determined. So what seems most sensible to me is a fixed-size arena for the permanent stuff (assets, global state, sound system), and another arena for world stuff. But I think that even if you had an arena for each independent system, it wouldn't affect performance much; is that right?
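This is roughly what I mean by the alternative: giving the world its own allocation, sized only once the world size is known (again, the names and numbers are just placeholders, not real code from my project or from HmH):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    uint8_t *base;
    size_t   size;
    size_t   used;
} Arena;

// Give a system its own backing allocation instead of a slice of one block.
static Arena arena_alloc(size_t size)
{
    Arena a = {0};
    a.base = malloc(size);   // or VirtualAlloc/mmap on the platform side
    a.size = size;
    return a;
}

int main(void)
{
    // Fixed arena for the permanent stuff...
    Arena permanent = arena_alloc(64 * 1024 * 1024);

    // ...and a world arena sized only after the player picks a world size
    // (tile count and per-tile bytes are placeholder numbers).
    size_t world_tiles    = 1024 * 1024;
    size_t per_tile_bytes = 64;
    Arena  world = arena_alloc(world_tiles * per_tile_bytes);

    (void)permanent; (void)world;
    free(world.base);
    free(permanent.base);
    return 0;
}
```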

I'm just saying this because when I saw Casey do that, it felt almost like an obvious decision for him. But now I feel like you must know a lot about your game to assume that, and it almost feels against the HmH ethos to make such a design assumption so early on, because the benefits don't seem that big. So is there some other benefit I'm not seeing, or was Casey just aware of the future needs of the game because he had a plan for it, or a roadmap of what to teach (like making "hot reloading" easy)?


In later episodes the allocator evolves and isn't just one big chunk subdivided anymore (if I remember correctly). There's no big advantage to having one giant allocation for everything; you can have one per system, or group them by lifetime...

Remember that Handmade Hero is an example; you need to try out and test what is appropriate for your game. There isn't a single right way to do things.

Realize too that this decision was made very early on in the project, and (from what I can recall, since it's been a while) this approach was actually the simplest one for Casey at the time. It was simple because all he did was create a giant memory block with a simple linear memory allocator on top, which he could free all at once. Slight modifications were made to this scheme as he went, but ultimately this structure made memory management pretty simple, which allowed for smoother development. Later on, I think Casey actually implemented dynamic arenas as the need arose.
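Something like this is what I picture from that description; it's only my own sketch of the idea, not the actual Handmade Hero code:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdlib.h>
#include <assert.h>

typedef struct {
    uint8_t *base;
    size_t   size;
    size_t   used;
} Arena;

static void *push_size(Arena *a, size_t size)
{
    assert(a->used + size <= a->size);
    void *result = a->base + a->used;
    a->used += size;
    return result;
}

// Free everything at once: no per-object bookkeeping, no fragmentation.
static void arena_clear(Arena *a)
{
    a->used = 0;
}

// Scratch/temporary use: take a mark, do some work, roll back to the mark.
static size_t arena_mark(Arena *a)               { return a->used; }
static void   arena_reset(Arena *a, size_t mark) { a->used = mark; }

int main(void)
{
    Arena a = { malloc(1 << 20), 1 << 20, 0 };

    float *verts = push_size(&a, 1024 * sizeof(float)); // lives until clear
    size_t mark  = arena_mark(&a);
    char  *tmp   = push_size(&a, 4096);                 // scratch work...
    memset(tmp, 0, 4096);
    arena_reset(&a, mark);                              // ...thrown away cheaply

    (void)verts;
    arena_clear(&a);
    free(a.base);
    return 0;
}
```

"Freeing" is just resetting `used`, either for the whole arena or back to a saved mark, which is why there is so little management overhead compared to tracking individual allocations.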

I think the biggest thing to take away from this is to start with the simplest thing that works for you and iterate from there. If, while developing, you find that things might actually work better if you malloc a couple of different arenas, then there's nothing wrong with that. You are correct that calling malloc more than once in your program is really not that big of a deal. The problem really comes when you are malloc'ing a bunch of little things every frame, which a lot of programs (generally with the OOP mindset) tend to do.
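For example (my own toy comparison, not code from the series), the difference between per-frame heap churn and per-frame arena use is basically this:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <assert.h>

typedef struct { uint8_t *base; size_t size; size_t used; } Arena;

static void *frame_push(Arena *a, size_t size)
{
    assert(a->used + size <= a->size);
    void *p = a->base + a->used;
    a->used += size;
    return p;
}

typedef struct { float x, y; } Particle;

int main(void)
{
    Arena frame = { malloc(1 << 20), 1 << 20, 0 };

    for (int frame_index = 0; frame_index < 60; ++frame_index)
    {
        frame.used = 0;  // "free" last frame's allocations with one assignment

        // All per-frame scratch data comes out of the arena, so there is no
        // per-object malloc/free churn inside the frame loop.
        Particle *particles = frame_push(&frame, 1000 * sizeof(Particle));
        for (int i = 0; i < 1000; ++i)
            particles[i] = (Particle){ (float)i, (float)frame_index };
    }

    free(frame.base);
    return 0;
}
```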

Thanks for reassuring me on that, guys.

Yeah, it actually makes sense that Casey wanted to introduce memory management "systems" in order of growing complexity, as well as keep things simple and transparent at the beginning.
