Pointer Size Casting

Good evening,

Hopefully a quick and easy question. In reviewing the HMH videos, I've noticed some basic pointer size casting (hopefully the correct terminology) that has always thrown me off a bit in my hobby coding endeavors.

One example from day 36 is the following (where ReadResult.Contents is a void*):
uint32* Pixels = (uint32*)((uint8*)ReadResult.Contents + Header->BitmapOffset);


So I believe I understand correctly that the overall (uint32*) cast is done so that pointer arithmetic advances in 4-byte chunks, one stored pixel (AA RR GG BB) at a time.

Where I get lost is the need for the initial (uint8*) cast, especially since Header->BitmapOffset is already a (uint32) value. I've attempted to change it to the following, but it doesn't work:
uint32* Pixels = ((uint32*)ReadResult.Contents + Header->BitmapOffset);


Any help understanding this would be greatly appreciated.

-Mebourne
When you add an int to a pointer, the int is automatically multiplied by the size of the type the pointer points to.

That means that

uint32* Pixels = ((uint32*)ReadResult.Contents + Header->BitmapOffset);


actually is

uint32* Pixels = (uint32*)((char*)ReadResult.Contents + sizeof(uint32) * Header->BitmapOffset);


(note that sizeof(char) is always 1)
Pointer arithmetic works in units of the pointed-to type's size (not the size of the pointer itself). Meaning if you add 1 to a uint32 pointer, the pointer moves 4 bytes ahead because sizeof( uint32 ) is 4. If you add 1 to a uint16 pointer, it moves 2 bytes ahead.
uint32* a;
a += 1; // Moves 'a' 4 bytes ( 1 * sizeof( uint32 ) ). If 'a' contains address 0x0000 it becomes 0x0004;
a += 3; // Moves 'a' 12 bytes ( 3 * sizeof( uint32 ) ). If 'a' contains address 0x0004 it becomes 0x0010;

If you cast the pointer to a type that has a different size, it affects the '+' operation. If you cast it to a uint8*, the pointed-to type is now 1 byte long, so '+' moves only one byte at a time.
uint8* b = ( uint8* ) a + 1; // If 'a' was 0x0010, 'b' will be 0x0011 ( 0x0010 + sizeof( uint8 ) ).
// Note that the cast happens before the '+'; if it didn't, 'b' would contain 0x0014.
// You can use parentheses to make the order explicit:
// uint8* b = (( uint8* ) a) + 1;

In your example we want to treat the result as 4-byte values ( uint32 ), but the offset in memory ( BitmapOffset ) is in bytes. So we first cast the contents pointer from void* (you can't do arithmetic on a void* in standard C, since void has no size) to uint8*, do the addition to move to the right place in memory, and then cast the resulting pointer to uint32* so later operations work in 4-byte steps.


Edited by Simon Anciaux on Reason: Typos
Make sure you understand that these two expressions are equal in C:
a[5]
*(a+5)
*(5+a)
5[a]

That means that for the following arrays x and y, the next two expressions are different:
u32 x[10];
u8 y[10];
(x + 5) // same as &x[5]
(y + 5) // same as &y[5] 

If you look at the &x[5] and &y[5] expressions, it is obvious that x[5] calculates byte offset 20 (sizeof(u32)*5) while y[5] calculates byte offset 5.
That means that (x + 5) calculates different offset than (y + 5).

Casting x to a byte pointer gives this expression: ((u8*)x + 5). That is the same as writing ((u8*)x)[5], which means "take the byte at offset 5 from the beginning of x". That's why simply changing the type while leaving the offset the same wouldn't work in your question; otherwise there would be no difference between the u32 and u8 types.

Edited by Mārtiņš Možeiko on
All, thank you for the quick replies and detailed responses; it is much appreciated. It's clear now that I was incorrectly assuming that, since the offset was already stored in a 32-bit value, the addition would line up properly.

-Mebourne