I want to loop through a tilemap and check each tile's surrounding tiles to get the correct tile. For example, I may want to know if a tile passes this rule:
Are there any patterns/techniques to do this quickly and efficiently?
Are you asking how to look at the tiles around a tile in a 2D grid?
```c
// your "current tile" coordinates
int x = ...;
int y = ...;

for (int dx=-1; dx<=+1; dx++) {
    for (int dy=-1; dy<=+1; dy++) {
        // ignore "current tile" itself
        if (dx==0 && dy==0) continue;

        // check tile[ty,tx]
        int tx = x+dx;
        int ty = y+dy;
        // ...
    }
}
```
EDIT: oh, or do you want to check whether specific neighbors are filled or not? If you can represent each tile with a few bits, for example 1 bit (0=empty, 1=filled), then you can pack the neighbors into an integer to compare.
```c
// your "current tile" coordinates
int x = ...;
int y = ...;

int pattern = 0;
int i = 0;
for (int dx=-1; dx<=+1; dx++) {
    for (int dy=-1; dy<=+1; dy++) {
        if (dx==0 && dy==0) continue;
        int tx = x+dx;
        int ty = y+dy;
        // assumes isempty() returns 1 if tile is empty, 0 otherwise
        pattern |= isempty(tx,ty) << (i++);
    }
}

// note the parentheses: == binds tighter than & in C
if ((pattern & 0b11110101) == 0b11010100) {
    // pattern matches
    ...
}
```
This way you can calculate the pattern for a grid position once, and compare it against multiple rules.
0b11110101 is the mask saying which neighbor tiles to look at, and 0b11010100 says what state each of those tiles must be in (for the bits that were 1 in the mask).
You can encode multiple states with more bits per tile. With 64-bit integers and 8 neighbors you get 64/8 = up to 8 bits per tile, which is 256 states per tile.
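The mask/value comparison described above can be sketched like this (the `matches` helper and the example bit patterns are mine, not from the thread). Note the parentheses around `pattern & mask`, since `==` binds tighter than `&` in C:

```c
#include <stdint.h>

// check a neighbor pattern against one rule: mask says which of the
// 8 neighbor bits we care about, want says what those bits must be
static int matches(uint32_t pattern, uint32_t mask, uint32_t want)
{
    // parentheses matter: == binds tighter than & in C
    return (pattern & mask) == want;
}
```

For example, `matches(0b11010110, 0b11110101, 0b11010100)` is true: the two neighbors outside the mask (bits 1 and 3) are ignored.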
Thanks! I would never have thought that I could encode it into a bitset. Until now I always had to loop through the tilemap multiple times, once for each rule :). But why do you skip the loop iteration when dx or dy equals zero? You should only skip when both dx and dy are zero, right?
One thing that I usually do is run this over the entire tilemap. I don't want to check one specific tile; I want to get all the tiles that match the pattern. That means I need an integer for each tile, and I check whether the same tile is filled multiple times, once for each "current tile" (every tile in the tilemap becomes the "current tile" at some point in the loop). Is there any way to optimize this?
If you keep your tile types as a separate byte array (so 8 bits per tile), then preparing this "pattern" integer is just 3 memory loads plus some simple arithmetic. That is relatively cheap, so simply do it for every position from scratch.
```c
int x = ..., y = ...;
uint8_t* tile = ...; // tilemap bytes packed row by row

uint32_t row0 = *(uint32_t*)&tile[(y+0)*width+x];
uint32_t row1 = *(uint32_t*)&tile[(y+1)*width+x];
uint32_t row2 = *(uint32_t*)&tile[(y+2)*width+x];

row0 = row0 & 0x00ffffff;
row1 = ((row1 & 0x00ff00ff) * 0x00010100) >> 16;
row2 = row2 & 0x00ffffff;

uint64_t pattern = (uint64_t)row0 | ((uint64_t)row1 << 24) | ((uint64_t)row2 << 40);
```
(this assumes you have an extra "dummy" byte at the end of the last row in the tile array, as the load reads 4 bytes when it only needs the last 3 bytes of the row)
If you're looping over all positions in your tilemap, then maybe it makes sense to load the row0/row1/row2 variables as 64-bit ints, and reuse them for multiple pattern preparations (8-3+1=6 of them) before loading the next 8 bytes of each row. That gives you 6x fewer memory ops per pattern preparation.
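A sketch of that reuse (the `six_patterns` name, the byte-per-tile layout, and the in-bounds assumptions are mine): three 64-bit loads produce six packed neighbor patterns by shifting each row register right by 8 bits per step. One caveat I've added: in 64-bit arithmetic the multiply trick needs an extra `& 0xffff`, because the high product bits are no longer discarded the way they are in 32-bit:

```c
#include <stdint.h>
#include <string.h>

// Build the same packed neighbor pattern as above for six consecutive
// center columns from just three 64-bit loads. Assumes tiles is a
// row-major byte array (one byte per tile) and that 8 bytes are
// readable at each load position.
static void six_patterns(const uint8_t* tiles, int width, int x, int y,
                         uint64_t out[6])
{
    uint64_t r0, r1, r2;
    memcpy(&r0, &tiles[(y + 0) * width + x], 8); // row above
    memcpy(&r1, &tiles[(y + 1) * width + x], 8); // center row
    memcpy(&r2, &tiles[(y + 2) * width + x], 8); // row below
    for (int n = 0; n < 6; n++) {
        uint64_t top = r0 & 0xffffff;                                 // 3 tiles above
        uint64_t mid = (((r1 & 0xff00ff) * 0x010100) >> 16) & 0xffff; // left + right
        uint64_t bot = r2 & 0xffffff;                                 // 3 tiles below
        out[n] = top | (mid << 24) | (bot << 40); // center tile is (x+n+1, y+1)
        r0 >>= 8; r1 >>= 8; r2 >>= 8;
    }
}
```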
And if you have only 1 bit per tile (just empty or not) and you store them packed into 64-bit ints, then you can load 64 bits of each row at once, and do a similar pattern construction for 64-3+1=62 columns with just those 3 memory loads. Basically: load three 64-bit ints, do shifts/masks like above to construct 62 patterns for 62 columns, do your pattern comparisons, and then move to the next 62 columns.
I don't understand the line row1 = ((row1 & 0x00ff00ff) * 0x00010100) >> 16;
Also, the row2 should be left-shifted by 48, not 40, right?
Is the "dummy" byte the first or the last 2 numbers?
Can you explain again why I should use 64 bits and how to construct the pattern? In my game, a tile can only be in 1 of 2 states: filled or empty. But the comparison pattern can have 3 states: filled, empty, and whatever (meaning I don't care if it's filled or not). Finally, I only need to check a maximum of 3 tiles in every direction, including the diagonals.
1.
that line transforms the 0xaabbccdd value into 0x0000bbdd - because you need only the left (0xdd) and right (0xbb) tiles around the center (0xcc) tile. The 0xaa and 0xcc values are useless.
It uses a common trick to "bitpack" the needed bytes (0xdd and 0xbb) into the top of a uint32 and then shift down to the lowest 16 bits. But you don't really need to do this if you use only 1 bit per tile, as 9 bits fit comfortably in an integer - no need to do extra work just to skip the middle tile.
No, 40 is correct. row0 has 3 tiles - so 24 bits. row1 has 2 tiles - so 16 bits. 24+16=40 bits are used, so row2 must be shifted up by 40 bits.
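The multiply step can be checked on a concrete value (the `pack_left_right` name is mine, not from the thread):

```c
#include <stdint.h>

// pack the left (0xdd) and right (0xbb) bytes of 0xaabbccdd into the
// low 16 bits, dropping the center (0xcc) and top (0xaa) bytes
static uint32_t pack_left_right(uint32_t row)
{
    uint32_t masked = row & 0x00ff00ff; // 0xaabbccdd -> 0x00bb00dd
    // multiplying by 0x00010100 computes (masked << 8) + (masked << 16);
    // in 32-bit arithmetic the 0xbb of (masked << 16) overflows away,
    // leaving 0xbbdddd00, and shifting down by 16 gives 0x0000bbdd
    return (masked * 0x00010100) >> 16;
}
```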
2.
The dummy byte is the upper byte you get when you read a uint32 from memory. You read 0xaabbccdd but you want to use only 0xbbccdd (because of little endian); the top byte is always read anyway, because there are no 24-bit loads from memory. This means that in your 2D tile byte array you need one extra byte at the end of the last row. But again, this is not relevant if you use only 1-bit types.
3.
So just for 1-bit tile types it would be something like this:
```c
uint64_t* tiles = ...;
size_t pitch = (width + 63) / 64; // uint64_t words per row

for (int y = 0; y + 2 < height; y++) {
    uint64_t* row = tiles + y*pitch;
    for (int x = 0; x + 2 < width; x += 62) {
        int word = x / 64, bit = x % 64;

        // gather 64 bits starting at column x from each of the three rows
        // (the columns at a word boundary need bits from the next word)
        uint64_t row0 = row[0*pitch + word] >> bit;
        uint64_t row1 = row[1*pitch + word] >> bit;
        uint64_t row2 = row[2*pitch + word] >> bit;
        if (bit != 0 && (size_t)(word + 1) < pitch) {
            row0 |= row[0*pitch + word + 1] << (64 - bit);
            row1 |= row[1*pitch + word + 1] << (64 - bit);
            row2 |= row[2*pitch + word + 1] << (64 - bit);
        }

        for (int n = 0; n < 62 && x + n + 2 < width; n++) {
            uint32_t pattern = (uint32_t)(row0 & 0b111)
                             | (uint32_t)(row1 & 0b111) << 3
                             | (uint32_t)(row2 & 0b111) << 6;
            // ... here do your pattern comparisons
            // coordinates for the center tile are (x+n+1, y+1)
            row0 >>= 1; row1 >>= 1; row2 >>= 1;
        }
    }
}
```
Have not tested, you need to verify & debug that to make sure it's correct.
It assumes each row is stored in a uint64_t array - if the width is not a multiple of 64, just leave the last bits empty or whatever, but start the next row on a new uint64_t.
And it extracts the middle tile too - all 9 bits - so make sure you ignore that bit in your pattern comparisons. Technically you could skip that tile and extract only 8 bits, if you really want to store your patterns as 8-bit values - that nicely fits into a byte, which saves space if you store a lot of pattern comparisons.
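As a small illustration of what the inner extraction produces (the `pattern9` helper is mine and handles only the single-word case, i.e. a left column c <= 61): bit 4 of the result is the middle tile the post mentions.

```c
#include <stdint.h>

// extract the 9 bits of a 3x3 neighborhood whose left column is c
// (c <= 61, so no word-boundary handling) from three bit-packed rows;
// bit 4 of the result is the center tile itself
static uint32_t pattern9(uint64_t row0, uint64_t row1, uint64_t row2, int c)
{
    return (uint32_t)((row0 >> c) & 0b111)
         | (uint32_t)((row1 >> c) & 0b111) << 3
         | (uint32_t)((row2 >> c) & 0b111) << 6;
}
```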
Thanks, I understand it now. This is very useful and will help me in the future (especially the "bitpack" part). Can you talk about the multiplication with 0x00010100? How does multiplying two bit patterns work?
Can I hijack this? :)
I was thinking of something very similar, because I'm really far from where Casey does this in its final form in the series (he talks about ditching tilemaps at some point).
So basically the simplest possible:
```cpp
/// Preliminary thingy w/ a way of storing whether the
/// tile is occupied or not
/// 64x64 array of bits
class Bit_Tilemap {
    uint64_t data[64];
public:
    /// yada yada
    int get(int x, int y) const {
        return (data[y] >> (63 - x)) & 1ULL;
    }
};
```
All cool, but then the trade-off is that each property of whatever stuff you want to put in your game will demand its own Bit_Tilemap to represent presence/absence.
OK, cool if you only want this to be your height map (and then this is parsed with whatever world granularity you want between points, right?). But how do you scale this to many properties?
It's easy to see this scales well for many objects, but am I missing something?