Nice endianness macro? (+ different instruction sets per OS?)

I want to have a macro that tells me the endianness of the machine I'm compiling for, because I expect to compile for different architectures, so I can do

if (LITTLE_ENDIAN) {
    //...
} else {
    //...
}

in places where I do network or low-level stuff.

Obviously I want the macro to add no instructions, but with this macro that I made

#include <stdint.h>

static const union {
    unsigned char bytes[4];
    uint32_t value;
} _endianMacroHelper = { { 0, 1, 2, 3 } };

#define BIG_ENDIAN (_endianMacroHelper.value == 0x00010203)
#define LITTLE_ENDIAN (_endianMacroHelper.value == 0x03020100)

even with the -O2 optimization flag, the compiler still generates a compare instruction for the 'if's.

So I guess the way to go is to set the endianness macro explicitly, based on other compiler macros or on macros set by my compile options?

Another question I have is how to target multiple OSes and instruction sets. AFAIK, Windows supports x86 and x64. I'm not sure about the instruction sets of Linux and Mac, but I think there are multiple common ones. So I assume that for each OS I should ship different executables for the different common instruction sets, and that in my code I should treat OS-specific and instruction-set-specific stuff orthogonally. But endianness would depend on the instruction set, because each instruction set supports only one endianness?

I'm making a game targeting PC only (Windows, Linux and Mac, I guess). This might be a stupid question, but what instruction sets should I support on each OS? I guess that brings up the question of what the oldest version of each OS to support is. There might not be a clear answer, but I just don't know much about Linux and Mac...



I'd suggest defining the macro using your "build system" (whatever you use). I.e.: Have explicit support for each target you care about. Then just define LITTLE_ENDIAN as 0 or 1, depending on the target's properties. This way, any compiler worth its salt will optimize out the branch.
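A minimal sketch of what I mean (MY_LITTLE_ENDIAN is just an example name, picked to avoid clashing with the LITTLE_ENDIAN/BIG_ENDIAN macros some system headers already define):

// Defined by the build system, e.g.:
//   cc -DMY_LITTLE_ENDIAN=1 ...   for x64/arm64 targets
//   cc -DMY_LITTLE_ENDIAN=0 ...   for a big-endian target
#ifndef MY_LITTLE_ENDIAN
#error MY_LITTLE_ENDIAN must be defined by the build system (0 or 1)
#endif

void handle_packet(void)
{
    if (MY_LITTLE_ENDIAN)
    {
        // little-endian path; the dead branch is removed at compile time
    }
    else
    {
        // big-endian path
    }
}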

That's just my personal preference though. I like it because it's simple and pretty much fail-safe.

Another thing to consider: x86 and x86-64 are little endian. Arm apparently is bi-endian. (Don't really know how that works, but I assume the compiler chooses a default.)

Point being: You are probably fine just assuming a little endian target. But I'd ask martins about that :P

I'd also say that supporting 32-bit x86 isn't worth it. The only "relevant" 32-bit platform I know about is WASM. (Well, unless you want the game to run on a Raspberry Pi.)

Usually you rely on compiler/OS headers to provide endianness macros. They are not in the standard, so you must use custom ones.

I don't think you can create your own macro that evaluates to a constant you can use in #ifdefs. But you can make a macro that evaluates to a constant number.

Like with a C99 compound literal: #define IS_BIG_ENDIAN (!*(char*)&(short){1})

Then if you use it in an if statement, that if will disappear after optimizations.
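For example, a complete little test program (names are just for illustration):

#include <stdio.h>

// C99 compound literal: take the address of a short with value 1 and look
// at its first byte - 0 means big-endian, 1 means little-endian.
#define IS_BIG_ENDIAN (!*(char*)&(short){1})

int main(void)
{
    // Not usable in #if, but with optimizations enabled the compiler
    // folds the comparison and removes the dead branch entirely.
    if (IS_BIG_ENDIAN)
        printf("big endian\n");
    else
        printf("little endian\n");
    return 0;
}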

But relying on system headers is not a big deal. On Windows pretty much everything is little-endian, unless you target some ancient custom Windows builds.

So a simple "#ifdef _WIN32 => little endian" decision is easy.

For Linux-based platforms you can use the <endian.h> header, which provides a __BYTE_ORDER value that can be __LITTLE_ENDIAN or __BIG_ENDIAN.

An alternative is to do it per architecture. For example, on arm (#ifdef __arm__) the compiler will additionally define __ARMEB__ or __ARMEL__ for big vs little endian. Similar defines exist for MIPS and others.
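Putting those checks together, a detection header could look something like this (just a sketch covering the cases mentioned here - other OSes and architectures would need more branches, and the macro name is up to you):

// endian_detect.h - compile-time endianness detection (sketch)
#if defined(_WIN32)
    // all supported Windows targets (x86/x64/arm/arm64) run little-endian
    #define MY_LITTLE_ENDIAN 1
#elif defined(__linux__)
    #include <endian.h>
    #if __BYTE_ORDER == __LITTLE_ENDIAN
        #define MY_LITTLE_ENDIAN 1
    #else
        #define MY_LITTLE_ENDIAN 0
    #endif
#elif defined(__arm__)
    #ifdef __ARMEB__
        #define MY_LITTLE_ENDIAN 0
    #else
        #define MY_LITTLE_ENDIAN 1
    #endif
#else
    #error unsupported target, add detection here
#endif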

Nowadays Windows supports the x86, x64, arm and arm64 instruction sets; it used to support IA64 and a few others, but those are dead. Now it is only 32/64-bit Intel + ARM.

There are some instruction sets that have one endianness only - x86, for example. And there are some that can have both little and big - arm, for example. On an arm processor you can switch at runtime which endianness you want. Most arm processors (I'd guess 99%+) run little.

That said - in practice all this endianness stuff does not matter much. You can simply write code that works the same in any endianness. Then no macros are needed - no problems, and the code is simpler. Simple, standard, no-ifdef, no-macro code.
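For example, reading a 32-bit big-endian value out of a network buffer can be done with plain shifts, and it behaves identically on every host (function name is just for illustration):

#include <stdint.h>

// Reads a 32-bit big-endian value from a byte buffer. Correct on both
// little- and big-endian hosts - no endianness macro needed.
static uint32_t read_u32_be(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] <<  8) |
           ((uint32_t)p[3] <<  0);
}

Compilers recognize this pattern, so on x64 it typically compiles down to a single load plus bswap anyway.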

As for which architectures to support - that is up to you. For a game targeting the Windows market you should do only x64. If you want to support Surface/Windows arm tablets, then arm64 too. On macOS you should support x64 and arm64. Older x86/armv7 has been kind of deprecated for some time now, no need to worry about those.

On Linux it gets tricky, as it supports every single common architecture, but some of those are used only in very niche areas - like routers or tiny embedded devices. For desktop/mobile usage I would say covering x64 and arm64 would give you decent coverage (maybe 70%+, maybe more). Linux is often run on older PCs, so maybe x86 is needed too.

All of the ones mentioned above will be little-endian. The same applies to current- and last-gen consoles: PS4/PS5/XSX/XB1 are all x64 and little-endian. Switch is arm64, little-endian.

There are some more exotic Linux platforms with MIPS and some PowerPC, but you should not care much about those if you're releasing a desktop/mobile game.

Nowadays there is also RISC-V, which is becoming popular. Currently it's not used much, but it will probably soon be available in many places.

Thanks a lot.

So about writing code that works the same in any endianness... Do you recommend doing that absolutely always, or would you consider making an exception for certain optimizations? For example, I think bitstream decoders can be simpler and faster if you tailor them to the endianness.

I assume you would stay away from those optimizations in general, and make exceptions only in isolated places and when it's absolutely crucial for performance.
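For concreteness, here's the kind of tailoring I mean (just a sketch, using a MY_LITTLE_ENDIAN-style macro like the ones discussed above; __builtin_bswap64 is GCC/Clang-specific, MSVC has _byteswap_uint64 instead):

#include <stdint.h>
#include <string.h>

// Portable version: byte-by-byte shifts, correct on any host endianness.
static uint64_t load_be64_portable(const uint8_t *p)
{
    return ((uint64_t)p[0] << 56) | ((uint64_t)p[1] << 48) |
           ((uint64_t)p[2] << 40) | ((uint64_t)p[3] << 32) |
           ((uint64_t)p[4] << 24) | ((uint64_t)p[5] << 16) |
           ((uint64_t)p[6] <<  8) | ((uint64_t)p[7] <<  0);
}

// Tailored version: one word-sized load, plus a byteswap when the host is
// little-endian and the stream is big-endian.
static uint64_t load_be64_fast(const uint8_t *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof(v)); // safe unaligned load
#if MY_LITTLE_ENDIAN
    v = __builtin_bswap64(v);
#endif
    return v;
}

(Though I suppose modern compilers often recognize the shift pattern in the portable version and emit the same load + bswap on their own.)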

