OpenGL texture pixel format and endianness

vbo
Vadim Borodin
Hey guys!

I am trying to roll my own platform layer implementation (just for fun) and I am struggling to understand how endianness relates to the OpenGL texture pixel format and type.

Basically I have pixels in my bitmap defined like Casey does:

uint32_t pixel = (RR << 16) | (GG << 8) | BB;

If I understand correctly, this means that depending on the endianness of the user's machine I get a different layout of the color components in memory:

Logical value:                   0xXXRRGGBB
Bytes in memory (Big Endian):    XX RR GG BB
Bytes in memory (Little Endian): BB GG RR XX

What I want is to upload these pixels to an OpenGL texture so I can "blit" them to the screen using a fullscreen textured quad. I am using the glTexImage2D function for this purpose, like so:

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA, fb.width, fb.height, 0,
    GL_BGRA, // format
    GL_UNSIGNED_INT_8_8_8_8_REV, // type
    NULL);

I am using the GL_BGRA format because it looks like reversed ARGB, which is my logical pixel representation, and I am using the _REV type to specify that it is really reversed. I assume OpenGL should read my pixel data byte by byte, treating each byte as a color component in ARGB order.

And all of this actually works fine on my Little Endian machine, but I want to be sure that it will also work without changes on Big Endian. I found the GL_UNPACK_SWAP_BYTES flag that, according to the documentation, has something to do with endianness, but I am not sure. Could somebody clarify what it does exactly? The documentation says that "This will cause OpenGL to perform byte swapping from the platform's native endian order to the order expected by OpenGL", but what order does it expect?

Maybe I just need to check for endianness at runtime and swap the bytes manually? Or does OpenGL already do this automagically somehow?
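
(Something like this is the kind of runtime check I have in mind; just a sketch:)

#include <stdint.h>

// Sketch of a runtime endianness check.
static int IsLittleEndian(void)
{
    uint32_t value = 1;
    // On a little-endian machine the least significant byte is stored first.
    return *(uint8_t *)&value == 1;
}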

Could someone also explain the difference between the GL_UNSIGNED_INT_8_8_8_8 and GL_UNSIGNED_BYTE types? For me both of them work exactly the same.
Mārtiņš Možeiko
Here is a good answer from Stack Overflow: http://stackoverflow.com/a/7786405
Basically GL_UNSIGNED_INT_8_8_8_8 is independent of endianness, and with _REV you can reverse the meaning of it (the component order). This is what you want to use in your case, where you prepare pixel values with shifting (which is endian independent).

But the GL_UNSIGNED_BYTE type is endian dependent: OpenGL will expect the bytes to be in a specific order in memory. If you write the pixel values as individual bytes to memory, then you would want to use GL_UNSIGNED_BYTE. Like this:
uint8_t* bytes = ...;
bytes[0] = BB; // blue first in memory, matching format = GL_BGRA
bytes[1] = GG;
bytes[2] = RR;
bytes[3] = AA;
bytes += 4;    // advance to the next pixel
...
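
Going the other way, since you prepare your pixel values with shifts into one 32-bit integer (which is endian independent in C source), you would describe them with one of the packed types instead. A quick sketch of that case:

uint32_t* pixels = ...; // same kind of placeholder as above
*pixels++ = ((uint32_t)AA << 24) | ((uint32_t)RR << 16) |
            ((uint32_t)GG << 8)  |  (uint32_t)BB;
// Upload with format = GL_BGRA and type = GL_UNSIGNED_INT_8_8_8_8_REV.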


Another option would be to not worry about endianness at all :) Most popular modern CPUs are little endian.
vbo
Vadim Borodin
I saw this question (I actually googled for an hour before posting this). I just don't understand the answer fully.

Please clarify a bit about "GL_BGRA are endian dependent". I am actually using format=GL_BGRA in my code. Do you mean it would be endianness-dependent if I use something like format=GL_BGRA and type=GL_UNSIGNED_BYTE?
Mārtiņš Možeiko
Oh, sorry. I meant GL_UNSIGNED_BYTE instead of GL_BGRA in the sentence about "endian dependent". I edited my post.

If you are passing GL_BGRA as the format to OpenGL, you are saying that you will be providing B, G, R and A color components. This doesn't matter for endianness. It's just information telling OpenGL which color components you are passing.

The other argument, the type (GL_UNSIGNED_BYTE or GL_UNSIGNED_INT_8_8_8_8), says in what format those components are packed in memory.

GL_UNSIGNED_BYTE means they are stored as individual bytes in memory. So on a machine of either endianness it will always be BGRABGRA... (and RGBARGBA... for the GL_RGBA format). That's because bytes don't have endianness. If you interpret these values as integers, they obviously are endian dependent.

GL_UNSIGNED_INT_8_8_8_8 says that you are storing colors as 32-bit integers, which obviously have endianness. If you are preparing colors as 32-bit integers then you don't care about endianness, because the integers will have the same endianness as the machine (sounds silly if I say it like this :). So you should use one of the packed GL_UNSIGNED_INT_8_8_8_8 types in your case. In some situations you may be getting the color as a 32-bit integer with the components in reversed order; in that case you will need to use GL_UNSIGNED_INT_8_8_8_8_REV.
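
To make that concrete, with format = GL_BGRA the two packed types expect these packings (a quick sketch):

// format = GL_BGRA means the components are B, G, R, A.
// GL_UNSIGNED_INT_8_8_8_8:     first component in the most significant byte.
uint32_t bgra = ((uint32_t)BB << 24) | ((uint32_t)GG << 16) | ((uint32_t)RR << 8) | AA;
// GL_UNSIGNED_INT_8_8_8_8_REV: first component in the least significant byte.
uint32_t argb = ((uint32_t)AA << 24) | ((uint32_t)RR << 16) | ((uint32_t)GG << 8) | BB;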
Casey Muratori
Just to clarify:

If a register reads 0x01234567, and you write that to memory location 0, your memory looks like

            BIG ENDIAN   LITTLE ENDIAN
Byte 0:         01            67
Byte 1:         23            45
Byte 2:         45            23
Byte 3:         67            01
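
(You can see this directly by printing the bytes yourself; a quick sketch, not code from the stream:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0x01234567;
    uint8_t *bytes = (uint8_t *)&value;
    // Prints 67 45 23 01 on a little endian machine, 01 23 45 67 on big endian.
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}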


OpenGL specifies its "ARGB" and "BGRA" as being the memory order of the values, not the register order. So whatever your values are stored as in memory, that is what it will use.

If you are constructing the pixels in a register like you described here:

1
uint32_t pixel = (RR << 16) | (GG << 8) | BB;


Then when you write that to memory on a little endian machine, you will get BB then GG then RR then 00 sequentially in memory. If you write that to memory on a big endian machine, you will get 00 then RR then GG then BB. So you would definitely have to do a swap (either manually or with OpenGL flags as described above) to make sure it was submitted properly, unless there is a "GL_ARGB" texture format (I don't remember if that exists).
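
(If you did need the manual swap, one portable way is something like this; just a sketch:)

// Swap the byte order of one 32-bit pixel.
static uint32_t SwapBytes32(uint32_t v)
{
    return (v >> 24) |
           ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) |
           (v << 24);
}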

- Casey
Mārtiņš Možeiko
You don't need to swap. You can pass GL_UNSIGNED_INT_8_8_8_8 (or its _REV variant) as the "type" argument and the OpenGL driver will interpret each pixel as a uint32_t, so it will automatically use the correct endianness.

Using GL_UNSIGNED_BYTE will always interpret the bytes the same on little/big endian. They are bytes.
Using GL_UNSIGNED_INT_8_8_8_8 will always interpret each group of four bytes as a uint32 in the machine's native endianness.
Using GL_UNSIGNED_INT_8_8_8_8_REV will do the same, but with the component order reversed.
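
So for the packing from the first post, (RR << 16) | (GG << 8) | BB, the endian-independent call is the one already shown there (a sketch, assuming pixels points at the packed data):

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA, fb.width, fb.height, 0,
    GL_BGRA,                     // components: B, G, R, A
    GL_UNSIGNED_INT_8_8_8_8_REV, // each pixel is one packed uint32_t
    pixels);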
vbo
Vadim Borodin
If I understand both of you correctly, you have different opinions on how OpenGL reads memory when it UNPACKs it from the client representation to the GPU representation (which is GPU-dependent and we don't need to think about it).

From the spec (https://www.opengl.org/registry/doc/glspec44.core.pdf, section 8.4.4.1) it looks like there are several possibilities depending on the "format" and "type" specifiers:

Data are taken from the currently bound pixel unpack buffer or client memory as a
sequence of signed or unsigned bytes (GL data types byte and ubyte), signed or
unsigned short integers (GL data types short and ushort), signed or unsigned
integers (GL data types int and uint), or floating-point values (GL data types
half and float).

So it can read the data as bytes or as integers, depending on the circumstances. Definitely, if we specify type=GL_UNSIGNED_BYTE it will read the components as a sequence of bytes, so we need to somehow specify these bytes in the right order. But when we specify type=GL_UNSIGNED_INT_8_8_8_8 it says that the whole thing will be read as an unsigned integer.

The question is: when OpenGL reads something from us as an unsigned integer, does it use the same endianness as the client code? Or does it just read it in Big Endian all of the time? The specification is a bit unclear on that. The only interesting sentence is this:

By default the values of each GL data type are interpreted as they would be
specified in the language of the client’s GL binding. If UNPACK_SWAP_BYTES is
enabled, however, then the values are interpreted with the bit orderings modified
as per table 8.4.

For me all of this is still unclear but maybe it helps somebody.
Mārtiņš Možeiko
vbo
The question is: when OpenGL reads something from us as an unsigned integer, does it use the same endianness as the client code?

Yes, in this case OpenGL uses the same endianness as the client code.
vbo
Vadim Borodin
mmozeiko, looks like you are very sure about this =) Thank you for your clarifications.

Just another minor thing: as I see it, there are two ways of generating the bitmap in our platform-independent code. One treats each pixel as an integer and uses bitwise operations on it (like Casey's original WeirdGradient), and the other works with the bytes/components as isolated uint8_t values (like mmozeiko's bytes[0] = BB...).

When should I use one option or the other? Which is the fastest way to do it, and which is more flexible? Which way is better if we are usually not generating colors on the fly but reading them from existing image files (PNG, BMP, etc.)?
Mārtiņš Možeiko
It doesn't really matter. Whatever works best for you :)

For simple code or testing/debugging you want to stick with separate byte values for red, green, blue channels. That way it is easier to debug or tweak stuff.

For high-performance code you will want to use SIMD instructions (SSE2/NEON) to process pixels in batches, typically four 32-bit integers at a time. Most likely you'll then write the algorithm to operate on a specific byte order and ignore how the compiler stores integers in memory, because you'll be storing SSE/NEON registers which will have the RGBA values in the order you want.
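
Roughly this kind of thing, just as a sketch of the idea (SSE2 only, not code from the series):

#include <emmintrin.h> // SSE2 intrinsics

// Swap the R and B channels of four 32-bit pixels at once.
static __m128i SwapRedBlue4(__m128i px)
{
    __m128i ag = _mm_and_si128(px, _mm_set1_epi32((int)0xFF00FF00u)); // keep A and G
    __m128i rb = _mm_and_si128(px, _mm_set1_epi32((int)0x00FF00FFu)); // isolate R and B
    __m128i br = _mm_or_si128(_mm_srli_epi32(rb, 16), _mm_slli_epi32(rb, 16)); // swap them
    return _mm_or_si128(ag, br);
}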
vbo
Vadim Borodin
mmozeiko, could you please provide some good links about all of this SIMD stuff?
Mārtiņš Možeiko
Sorry, I'm not sure what to recommend for learning SSE. I'm pretty sure Casey will go into the SSE2 topic when he explains how to write a proper renderer.

I learned SSE from the Intel manuals, but that is not the best way to learn it: http://www.intel.com/content/www/...s-software-developer-manuals.html They have all the instructions and their meanings.
Another good link mentioned in the Twitch chat is the Intel Intrinsics Guide: https://software.intel.com/sites/landingpage/IntrinsicsGuide/ It lists the intrinsics you can use in C code, which assembly instruction corresponds to each, and small pseudocode that explains what calculation is performed.

Here is a blog post with a bit of an introduction to SSE: http://felix.abecassis.me/2011/09/cpp-getting-started-with-sse/
You can also take a look at some open source code to see how operations are performed efficiently. For example, converting from RGBA to BGRA: https://chromium.googlesource.com...renderer/d3d/loadimageSSE2.cpp#72

Maybe somebody else can post good links about this subject.
vbo
Vadim Borodin
mmozeiko, thanks a lot. Sounds a bit too hardcore for me right now though =)