OpenGL vs D3D11 rendering

I'm trying to render a square at specific locations within a window. So far I've been able to do so successfully using OpenGL but, for some reason, I'm having trouble doing the same with D3D11.

Here's how I do it using OpenGL:

Vertex shader:

static char vs[] =
	"#version 450 core                                                    \n"
	"layout (location = 0) in vec2 aVertex;                               \n"
	"layout (location = 1) in vec2 aTexture;                              \n"
	"                                                                     \n"
	"uniform mat4 uProjection;                                            \n"
	"uniform mat4 uModel;                                                 \n"
	"out vec2 vTexture;                                                   \n"
	"                                                                     \n"
	"void main() {                                                        \n"
	"	gl_Position = uProjection * uModel * vec4(aVertex, 0.0, 1.0); \n"
	"	vTexture = aTexture;                                          \n"
	"}                                                                    \n";

Fragment shader:

static char fs[] =
	"#version 450 core                                                   \n"
	"out vec4 FragColor;                                                 \n"
	"                                                                    \n"
	"uniform vec3 uColor;                                                \n"
	"uniform sampler2D uSampler;                                         \n"
	"in vec2 vTexture;                                                   \n"
	"                                                                    \n"
	"void main() {                                                       \n"
	"	FragColor = vec4(uColor, 1.0) * texture(uSampler, vTexture); \n"
	"}                                                                   \n";

The vertices and their corresponding texture coordinates:

GLfloat vertices[] = {
	// Position   // Texture
	0.0f,  1.0f,  0.0f, 1.0f,
	1.0f,  0.0f,  1.0f, 0.0f,
	0.0f,  0.0f,  0.0f, 0.0f,
	
	0.0f,  1.0f,  0.0f, 1.0f,
	1.0f,  1.0f,  1.0f, 1.0f,
	1.0f,  0.0f,  1.0f, 0.0f
};
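
For context, the buffer and attribute setup isn't shown here; a minimal sketch of what it presumably looks like for this interleaved layout (the vao/vbo names are hypothetical):

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

/* location 0 = aVertex (vec2), location 1 = aTexture (vec2), 16-byte stride */
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void *) 0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void *) (2 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);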

For projection, I use an orthographic matrix computed by a function I implemented myself, which returns the result in column-major order like so:

static mat4 OrthographicProjection(float left, float right, float bottom, float top, float near,
                                   float far) {
	float a = -(right + left) / (right - left);
	float b = -(top + bottom) / (top - bottom);
	float c = -(far + near) / (far - near);
	
	mat4 result = {{
		2.0f / (right - left),	0.0f,			0.0f,			a,
		0.0f,			2.0f / (top - bottom),	0.0f,			b,
		0.0f,			0.0f,			-2.0f / (far - near),	c,
		0.0f,			0.0f,			0.0f,			1.0f
	}};
	
	return ColumnMajorOrder(result);
}
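
ColumnMajorOrder() isn't shown; presumably it just transposes the row-major literal above. A sketch under that assumption (mat4 assumed to wrap a flat float m[16]):

static mat4 ColumnMajorOrder(mat4 in) {
	mat4 out;
	/* Transpose: row-major source -> column-major result, so the matrix can
	   be uploaded with glUniformMatrix4fv(..., GL_FALSE, ...). */
	for (int row = 0; row < 4; row++) {
		for (int col = 0; col < 4; col++) {
			out.m[col * 4 + row] = in.m[row * 4 + col];
		}
	}
	return out;
}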

I then call it and set the uProjection uniform to its value:

mat4 ortho = OrthographicProjection(0.0f, window_width, window_height, 0.0f, -1.0f, 1.0f);
GLint location = glGetUniformLocation(shaderProgram, "uProjection");
glUniformMatrix4fv(location, 1, GL_FALSE, ortho.m);

Finally, to draw the square, I first apply the transformations I want using the Transform() function, which sets the uModel value, then render it using the RenderSquare() function, which sets the uColor value and draws the shape:

static void Transform(int32_t x, int32_t y, uint32_t width, uint32_t height, uint32_t angle) {
	mat4 model = Identity();
	model = Translate(model, x, y, 0.0f);
	
	model = Translate(model, 0.5f * width, 0.5f * height, 0.0f);
	model = Rotate(model, angle, 0.0f, 0.0f, 1.0f);
	model = Translate(model, -0.5f * width, -0.5f * height, 0.0f);
	
	model = Scale(model, width, height, 0.0f);
	
	GLint program;
	glGetIntegerv(GL_CURRENT_PROGRAM, &program);
	GLint location = glGetUniformLocation(program, "uModel");
	glUniformMatrix4fv(location, 1, GL_FALSE, model.m);
}

static void RenderSquare(int32_t x, int32_t y, uint32_t height, uint32_t width, uint32_t angle, uint32_t color) {
	Transform(x, y, width, height, angle);
	
	float r = ((color >> 16) & 255) / 255.0f;
	float g = ((color >> 8) & 255) / 255.0f;
	float b = ((color >> 0) & 255) / 255.0f;
	
	GLint program;
	glGetIntegerv(GL_CURRENT_PROGRAM, &program);
	GLint location = glGetUniformLocation(program, "uColor");
	glUniform3f(location, r, g, b);

	uint8_t image[4] = {255, 255, 255, 255};
	LoadTexture("uSampler", image, 1, 1);
	glDrawArrays(GL_TRIANGLES, 0, 6);
}
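
LoadTexture() isn't shown; presumably it uploads the RGBA8 pixels and points the named sampler uniform at texture unit 0. A rough sketch under that assumption (a real version would cache the texture instead of recreating it on every call):

static void LoadTexture(const char *uniform, uint8_t *image, int width, int height) {
	GLuint texture;
	glGenTextures(1, &texture);
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, texture);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);

	/* Point the sampler uniform at texture unit 0. */
	GLint program;
	glGetIntegerv(GL_CURRENT_PROGRAM, &program);
	glUniform1i(glGetUniformLocation(program, uniform), 0);
}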

Here's how I do it using D3D11:

HLSL shader:

static char hlsl[] =
	"cbuffer ProjectionBuffer : register(b0) {                                      \n"
	"	float4x4 uProjection;                                                   \n"
	"}                                                                              \n"
	"                                                                               \n"
	"cbuffer ModelBuffer : register(b1) {                                           \n"
	"	float4x4 uModel;                                                        \n"
	"}                                                                              \n"
	"                                                                               \n"
	"struct VS_INPUT {                                                              \n"
	"	float2 position : POSITION;                                             \n"
	"	float2 texcoord : TEXCOORD;                                             \n"
	"};                                                                             \n"
	"                                                                               \n"
	"struct PS_INPUT {                                                              \n"
	"	float4 position : SV_POSITION;                                          \n"
	"	float2 texcoord : TEXCOORD;                                             \n"
	"};                                                                             \n"
	"                                                                               \n"
	"PS_INPUT vs(VS_INPUT input) {                                                  \n"
	"	PS_INPUT output;                                                        \n"
	"                                                                               \n"
	"	float4 pos = float4(input.position, 0.0f, 1.0f);                        \n"
	"	output.position = mul(uProjection, mul(uModel, pos));                   \n"
	"	output.texcoord = input.texcoord;                                       \n"
	"                                                                               \n"
	"	return output;                                                          \n"
	"}                                                                              \n"
	"                                                                               \n"
	"Texture2D tex : register(t0);                                                  \n"
	"SamplerState samplerState : register(s0);                                      \n"
	"                                                                               \n"
	"cbuffer ColorBuffer : register(b2) {                                           \n"
	"	float3 uColor;                                                          \n"
	"}                                                                              \n"
	"                                                                               \n"
	"float4 ps(PS_INPUT input) : SV_TARGET {                                        \n"
	"	return float4(uColor, 1.0f) * tex.Sample(samplerState, input.texcoord); \n"
	"}                                                                              \n";

The same vertices and their corresponding texture coordinates:

float vertices[] = {
	// Position   // Texture
	0.0f,  1.0f,  0.0f, 1.0f,
	1.0f,  0.0f,  1.0f, 0.0f,
	0.0f,  0.0f,  0.0f, 0.0f,
	
	0.0f,  1.0f,  0.0f, 1.0f,
	1.0f,  1.0f,  1.0f, 1.0f,
	1.0f,  0.0f,  1.0f, 0.0f
};
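
Likewise, the vertex buffer setup isn't shown; a sketch using the hypothetical input_layout from the previous snippet (note the 16-byte stride matching the two float2s):

D3D11_BUFFER_DESC vb_desc = {};
vb_desc.ByteWidth = sizeof(vertices);
vb_desc.Usage     = D3D11_USAGE_IMMUTABLE;
vb_desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

D3D11_SUBRESOURCE_DATA vb_data = {};
vb_data.pSysMem = vertices;

ID3D11Buffer *vertex_buffer;
ID3D11Device_CreateBuffer(render_device, &vb_desc, &vb_data, &vertex_buffer);

UINT stride = 4 * sizeof(float);
UINT offset = 0;
ID3D11DeviceContext_IASetInputLayout(render_context, input_layout);
ID3D11DeviceContext_IASetVertexBuffers(render_context, 0, 1, &vertex_buffer, &stride, &offset);
ID3D11DeviceContext_IASetPrimitiveTopology(render_context, D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);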

The same orthographic matrix is used for projection; I copy its value into the uProjection constant buffer:

mat4 ortho = OrthographicProjection(0.0f, window_width, window_height, 0.0f, -1.0f, 1.0f);
	
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = sizeof(ortho);
desc.Usage          = D3D11_USAGE_DYNAMIC;
desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
		
ID3D11Device_CreateBuffer(render_device, &desc, 0, &projection_buffer);

D3D11_MAPPED_SUBRESOURCE mapped;
ID3D11DeviceContext_Map(render_context, (ID3D11Resource *) projection_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, ortho.m, sizeof(ortho));
ID3D11DeviceContext_Unmap(render_context, (ID3D11Resource *) projection_buffer, 0);
	
ID3D11DeviceContext_VSSetConstantBuffers(render_context, 0, 1, &projection_buffer);

Finally, to draw the square, I do the exact same thing: call the Transform() function to write the applied transformations into the uModel buffer, then render with the RenderSquare() function, which sets the uColor buffer's value and draws the shape:

static void Transform(int32_t x, int32_t y, uint32_t width, uint32_t height, uint32_t angle) {
	mat4 model = Identity();
	model = Translate(model, x, y, 0.0f);
	
	model = Translate(model, 0.5f * width, 0.5f * height, 0.0f);
	model = Rotate(model, angle, 0.0f, 0.0f, 1.0f);
	model = Translate(model, -0.5f * width, -0.5f * height, 0.0f);
	
	model = Scale(model, width, height, 0.0f);
	
	D3D11_BUFFER_DESC desc = {};
	desc.ByteWidth      = sizeof(model);
	desc.Usage          = D3D11_USAGE_DYNAMIC;
	desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
	
	ID3D11Buffer *buffer;	
	ID3D11Device_CreateBuffer(render_device, &desc, 0, &buffer);
	
	D3D11_MAPPED_SUBRESOURCE mapped;
	ID3D11DeviceContext_Map(render_context, (ID3D11Resource *) buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
	memcpy(mapped.pData, model.m, sizeof(model));
	ID3D11DeviceContext_Unmap(render_context, (ID3D11Resource *) buffer, 0);
	
	ID3D11DeviceContext_VSSetConstantBuffers(render_context, 1, 1, &buffer);
	ID3D11Buffer_Release(buffer); /* the pipeline holds its own reference; without this the buffer leaks every call */
}

static void RenderSquare(int32_t x, int32_t y, uint32_t height, uint32_t width, uint32_t angle, uint32_t color) {
	Transform(x, y, width, height, angle);
	
	float rgba[] = {
		((color >> 16) & 255) / 255.0f,
		((color >> 8) & 255) / 255.0f,
		((color >> 0) & 255) / 255.0f,
		1.0f
	};
	
	D3D11_BUFFER_DESC desc = {};
	desc.ByteWidth      = sizeof(rgba);
	desc.Usage          = D3D11_USAGE_DYNAMIC;
	desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
	
	ID3D11Buffer *buffer;	
	ID3D11Device_CreateBuffer(render_device, &desc, 0, &buffer);
		
	D3D11_MAPPED_SUBRESOURCE mapped;
	ID3D11DeviceContext_Map(render_context, (ID3D11Resource *) buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
	memcpy(mapped.pData, rgba, sizeof(rgba));
	ID3D11DeviceContext_Unmap(render_context, (ID3D11Resource *) buffer, 0);
	
	ID3D11DeviceContext_PSSetConstantBuffers(render_context, 2, 1, &buffer);
	ID3D11Buffer_Release(buffer); /* as in Transform(): release our reference to avoid leaking a buffer per call */
	
	uint8_t image[4] = {255, 255, 255, 255};
	LoadTexture(image, 1, 1);
	
	ID3D11DeviceContext_Draw(render_context, 6, 0);
}
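
For completeness, the D3D11 LoadTexture() helper isn't shown either; presumably it uploads the RGBA8 image and binds it to slot t0 (the sampler for s0 is assumed to be created and bound once at startup). A sketch, which like the GL version would ideally cache its objects instead of recreating them per call:

static void LoadTexture(uint8_t *image, uint32_t width, uint32_t height) {
	D3D11_TEXTURE2D_DESC desc = {};
	desc.Width            = width;
	desc.Height           = height;
	desc.MipLevels        = 1;
	desc.ArraySize        = 1;
	desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
	desc.SampleDesc.Count = 1;
	desc.Usage            = D3D11_USAGE_IMMUTABLE;
	desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

	D3D11_SUBRESOURCE_DATA data = {};
	data.pSysMem     = image;
	data.SysMemPitch = width * 4;

	ID3D11Texture2D *texture;
	ID3D11Device_CreateTexture2D(render_device, &desc, &data, &texture);

	/* Bind to register(t0) used by the pixel shader. */
	ID3D11ShaderResourceView *view;
	ID3D11Device_CreateShaderResourceView(render_device, (ID3D11Resource *) texture, 0, &view);
	ID3D11DeviceContext_PSSetShaderResources(render_context, 0, 1, &view);
}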

I do want to point out that if I just set the output.position in the shader like so:

output.position = float4(input.position, 0.0f, 1.0f); 

The square is rendered where it's supposed to be; I just can't apply any transformations to it since I'm not using any matrices...

This leads me to believe that there's some subtle difference between the coordinate systems of these two APIs, but I can't seem to find it :(

What does "The square is rendered where it supposed to be" means? If it is rendered where it supposed to be then the transformation is applied or not?

And what does "since I don't use any matrices" means? Because you use two float4x4 matrices in hlsl shader.

I'm confused about what exactly is not working for you. Can you share RenderDoc capture and explain what exactly shows up incorrectly in output?

I'm not sure the orthographic matrix is correct.

Emphasis on "not sure".

DirectX uses an NDC space with the depth axis (z) between 0 (near plane) and 1 (far plane), not -1 to 1. So instead of -2/(far-near) you want -1/(far-near) (if using left-handed it would be 1/(far-near)). But I don't think it will fix your issue, as your z coordinates are always 0.

Another thing I'm not sure about (and that won't fix your issue) is whether you need those computations in a, b, and c. I believe those computations are there to "move" the viewport without moving the "scene". If you always use (0, 0) for left and bottom it won't matter. But if you use other values for left and bottom, you'll move a "window" around the scene, which might be what you want; the other option is to move the viewport around while keeping the same view of the scene (it's probably not clear what I'm talking about). To do that you'd just pass -1 for both a and b, and c would be -near/(far-near). I prefer to use width and height to make it clear that I'm not "moving the viewport" around.
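
To make that concrete (and again, I'm not sure this is exactly what you want), a sketch combining both remarks: a width/height-parameterized orthographic matrix targeting D3D's 0..1 depth range (right-handed), keeping the y-flipped convention of your OrthographicProjection(0, w, h, 0, ...) call. The function name is hypothetical, and it still needs the same column-major conversion before upload:

static mat4 OrthographicProjectionD3D(float width, float height, float near, float far) {
	mat4 result = {{
		2.0f / width,  0.0f,           0.0f,                 -1.0f,
		0.0f,          -2.0f / height, 0.0f,                 1.0f, /* +1 because y is flipped; with y-up this would be -1 */
		0.0f,          0.0f,           -1.0f / (far - near), -near / (far - near),
		0.0f,          0.0f,           0.0f,                 1.0f
	}};

	return ColumnMajorOrder(result);
}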

If you could share the whole code (the smallest thing that reproduces the issue and is easy to compile), it would help in finding out what's going on.


Edited by Simon Anciaux

If you want the exact same behavior between GL and D3D for the NDC range, there is a glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE) call you can use.
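
It's a one-time setup call (core since OpenGL 4.5, which the #version 450 shaders above already require):

/* One-time setup: keep GL's lower-left clip-space origin but switch the
   depth range from -1..1 to D3D-style 0..1. Requires GL 4.5 or
   ARB_clip_control. */
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);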


Replying to mrmixer (#30459)

Yes, I know. But joystick is using the same matrix for both OpenGL and DirectX, and they are trying to make it work with DirectX. I don't think there is a way to configure the clip space in DirectX? I was just pointing out a difference between the two.


Replying to mmozeiko (#30460)