A tile chunk is conceptually a group of tiles, in our case a block of 16 * 16 tiles. In practice, however, the tile_chunk structure is a single pointer to the memory where we store the tiles, so the size (in bytes) of a tile_chunk is the size of a pointer: 8 bytes on a 64 bit CPU.
We only allocate an array of 128 * 128 tile_chunk structures, which means we allocate 128 * 128 * 8 bytes. After this allocation, no tiles are allocated yet.
When we try to access the actual tiles of a chunk for the first time, we allocate memory for the tiles of that chunk: 16 * 16 * 4 bytes (a tile value is a uint32, which is 4 bytes).
So at startup we allocate 128 * 128 * 8 = 131,072 bytes.
When we access a tile chunk for the first time, we allocate its tile data: 16 * 16 * 4 = 1,024 bytes.
In the sparse version:
- Any chunk that isn't used costs only 8 bytes;
- Any used chunk costs 8 + (16 * 16 * 4) = 1,032 bytes (the tile_chunk size plus the size of its tiles).
In the non-sparse version every chunk, used or not, costs 8 + (16 * 16 * 4) bytes.
Diagram of the memory:
c = tile_chunk structure
t = tile data (one t represents 16 * 16 * 4 bytes)
o = any data unrelated to the tiles
0 = unused memory
At startup:
|000000000000000000000|
We push the tile chunk array
|cccc00000000000000000|
Any other part of the application pushes data
|ccccooo00000000000000|
We access a chunk, triggering the allocation for the tiles
|ccccooot0000000000000|
Any other part of the application pushes data
|ccccoootoooo000000000|
We access another chunk
|ccccoootoooot00000000|
...
In the non-sparse version it was:
|cccctttttttttttttttttoooooooooo000000|