Hey guys!
I am trying to roll my own platform layer implementation (just for fun) and I am struggling to understand the relationship between endianness and OpenGL's texture pixel format and type parameters.
Basically I have pixels in my bitmap defined like Casey does:
| uint32_t pixel = (RR << 16) | (GG << 8) | BB;
|
If I understand correctly, this means that depending on the endianness of the user's machine I get a different layout of the color components in memory:
| Logical value:                   0xXXRRGGBB
Bytes in memory (Big Endian):    XX RR GG BB
Bytes in memory (Little Endian): BB GG RR XX
|
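Just to illustrate what I mean, this is the kind of throwaway check I have in mind to look at the byte order in memory (not part of the platform layer itself):
| #include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t pixel = 0x00FF8040u; // XX = 0x00, RR = 0xFF, GG = 0x80, BB = 0x40
    uint8_t bytes[4];
    memcpy(bytes, &pixel, sizeof(pixel));

    // Little Endian prints: 40 80 FF 00
    // Big Endian prints:    00 FF 80 40
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
|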
What I want is to upload these pixels to an OpenGL texture so I can "blit" them to the screen using a fullscreen textured quad. I am using the glTexImage2D function for this purpose, like so:
| glTexImage2D(
    GL_TEXTURE_2D, 0,
    GL_RGBA,                     // internal format
    fb.width, fb.height, 0,
    GL_BGRA,                     // format
    GL_UNSIGNED_INT_8_8_8_8_REV, // type
    NULL);                       // pixel data pointer
|
I am using the GL_BGRA format because it is reversed ARGB, which matches my logical pixel representation, and I am using the _REV type to specify that it really is reversed. My assumption is that OpenGL reads my pixel data byte by byte, treating each byte as a color component in ARGB order.
All of this actually works fine on my Little Endian machine, but I want to be sure it will also work without changes on Big Endian. I found the GL_UNPACK_SWAP_BYTES pixel-store parameter, which according to the documentation has something to do with endianness, but I am not sure. Could somebody clarify what exactly it does? The documentation says "This will cause OpenGL to perform byte swapping from the platform's native endian order to the order expected by OpenGL", but what order does it expect?
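For reference, this is how I assume that parameter would be enabled if it turns out to be needed; I have not actually tried it yet:
| // Assumption: ask OpenGL to swap the byte order of multibyte elements
// while it unpacks the pixel data handed to glTexImage2D.
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE);
|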
Maybe I just need to check the endianness at runtime and swap the bytes manually? Or does OpenGL already do this automagically somehow?
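What I mean by the manual route is something along these lines (just a sketch; is_big_endian and swap_u32 are made-up helper names):
| #include <stdint.h>

// Detect the machine's byte order at runtime.
static int is_big_endian(void)
{
    union { uint32_t u; uint8_t b[4]; } t = { 0x01020304u };
    return t.b[0] == 0x01;
}

// Reverse the bytes of one 32-bit pixel before handing it to OpenGL.
static uint32_t swap_u32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}
|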
Could someone also explain the difference between the GL_UNSIGNED_INT_8_8_8_8 and GL_UNSIGNED_BYTE types? Both of them seem to work exactly the same for me.
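Concretely, I mean something like changing only the type argument in the call above:
| // Variant A
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, fb.width, fb.height, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8, NULL);

// Variant B
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, fb.width, fb.height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);
|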