So I got to the opengl point and then decided to get Anton's opengl tutorials.
I get most of it, and using his code I can already draw a cube to the screen that rotates in place around XYZ; the cam moves around in XZ and the mouse does some look-around in XY.
The weird part is when I add ANOTHER cube. This other cube will fly around like crazy if I apply any rotation, while the first cube rotates in XYZ but stays in place.
I switched from Anton's math funcs to GLM's, but the problem persists.
Ok, so the main loop looks like this:
input.update(elapsed_seconds, &cam, &object, &object2);

auto view_mat   = transform(&cam);
auto model_mat  = transform(&object);
auto model_mat2 = transform(&object2);

/// first obj
shader.use_programme_on_VAO(model.get_VAO());
shader.load_view_matrix(view_mat.m);
shader.load_model_matrix(model_mat.m);
glBindTexture(GL_TEXTURE_2D, tex1);
glDrawElements(GL_TRIANGLES, model.indexes_count(), GL_UNSIGNED_BYTE, (GLvoid*)0);

/// second obj
shader.load_model_matrix(model_mat2.m);
glBindTexture(GL_TEXTURE_2D, tex2);
glDrawElements(GL_TRIANGLES, model.indexes_count(), GL_UNSIGNED_BYTE, (GLvoid*)0);

// put the stuff we've been drawing onto the display
glfwSwapBuffers( g_window );
The transform func looks like this:
static mat4 transform(Entity* e) {
    mat4 T, R;
    T = translate( identity_mat4(), vec3( -e->pos_X, -e->pos_Y, -e->pos_Z ) );
    R = rotate_x_deg( T, -e->rot_X );
    R = rotate_y_deg( R, -e->rot_Y );
    R = rotate_z_deg( R, -e->rot_Z );
    return R * T;
}
Both camera and objects are an "entity", which is nothing but a position and a rotation float arrays.
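For reference, a minimal sketch of what that Entity might look like (the field names come from the code in this thread; the constructor shape is an assumption based on how the entities are constructed later):

```cpp
#include <cassert>

// Minimal sketch of the Entity described above: just a position and a
// rotation stored as plain floats. The two-array constructor is assumed
// from initializations like Entity({0,0,2}, {0,0,0}) seen below.
struct Entity {
    float pos_X, pos_Y, pos_Z;   // position
    float rot_X, rot_Y, rot_Z;   // rotation in degrees

    Entity(const float (&pos)[3], const float (&rot)[3])
        : pos_X(pos[0]), pos_Y(pos[1]), pos_Z(pos[2]),
          rot_X(rot[0]), rot_Y(rot[1]), rot_Z(rot[2]) {}
};
```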
The input update just gets keyboard and mouse input, and I pass some values for the rotation of the cubes:
void Input::update(float elapsed_seconds, Entity* cam, Entity* obj, Entity* obj2) {
    get_inputs();

    /// in millisec
    accum_time += elapsed_seconds*1000;
    float input_delay = 50;

    if (accum_time > input_delay) {
        float units_per_sec = 0.005;
        float speed = units_per_sec * 1000/input_delay;
        accum_time -= input_delay;

        if (key_LSHIFT) { speed *= 10; }
        if (key_W) { cam->pos_Z -= speed; }
        if (key_S) { cam->pos_Z += speed; }
        if (key_A) { cam->pos_X -= speed; }
        if (key_D) { cam->pos_X += speed; }

        float rotx = cam->rot_X + 0.22*(center_window_y - mouse_y);
        float roty = cam->rot_Y + 0.22*(center_window_x - mouse_x);
        cam->rot_X = rotx;
        cam->rot_Y = roty;

        /// a rotation each t seconds is deg = 360 / (t*1000/input_delay)
        obj->rot_X += 0.6;
        obj->rot_Y += 1.3;
        obj->rot_Z += 1.8;

        obj2->rot_X += 0.8;
        obj2->rot_Y += 1.8;
        obj2->rot_Z += 1.2;
    }
}
It gives me the effect in the video.
Any ideas what is going on here?
thx
What exactly is your transform() function supposed to do? It creates a translation matrix T, then multiplies by three rotation matrices, then multiplies by the translation again? So you're translating twice, before and after the rotation?
That's kinda garbage I left there while trying things out; actually I should pass the identity matrix there:
static mat4 transform(Entity* e) {
    mat4 T, R;
    T = translate( identity_mat4(), vec3( -e->pos_X, -e->pos_Y, -e->pos_Z ) );
    R = rotate_x_deg( identity_mat4(), -e->rot_X );
    R = rotate_y_deg( R, -e->rot_Y );
    R = rotate_z_deg( R, -e->rot_Z );
    return R * T;
}
These are Anton's funcs:
mat4 identity_mat4() {
    return mat4( 1.0f, 0.0f, 0.0f, 0.0f,
                 0.0f, 1.0f, 0.0f, 0.0f,
                 0.0f, 0.0f, 1.0f, 0.0f,
                 0.0f, 0.0f, 0.0f, 1.0f );
}

// translate a 4d matrix with xyz array
mat4 translate( const mat4& m, const vec3& v ) {
    mat4 m_t = identity_mat4();
    m_t.m[12] = v.v[0];
    m_t.m[13] = v.v[1];
    m_t.m[14] = v.v[2];
    return m_t * m;
}

// rotate around x axis by an angle in degrees
mat4 rotate_x_deg( const mat4& m, float deg ) {
    // convert to radians
    float rad = deg * ONE_DEG_IN_RAD;
    mat4 m_r = identity_mat4();
    m_r.m[5] = cos( rad );
    m_r.m[9] = -sin( rad );
    m_r.m[6] = sin( rad );
    m_r.m[10] = cos( rad );
    return m_r * m;
}

// rotate around y axis by an angle in degrees
mat4 rotate_y_deg( const mat4& m, float deg ) {
    // convert to radians
    float rad = deg * ONE_DEG_IN_RAD;
    mat4 m_r = identity_mat4();
    m_r.m[0] = cos( rad );
    m_r.m[8] = sin( rad );
    m_r.m[2] = -sin( rad );
    m_r.m[10] = cos( rad );
    return m_r * m;
}

// rotate around z axis by an angle in degrees
mat4 rotate_z_deg( const mat4& m, float deg ) {
    // convert to radians
    float rad = deg * ONE_DEG_IN_RAD;
    mat4 m_r = identity_mat4();
    m_r.m[0] = cos( rad );
    m_r.m[4] = -sin( rad );
    m_r.m[1] = sin( rad );
    m_r.m[5] = cos( rad );
    return m_r * m;
}
But in any case, if I use the GLM funcs I get the same results.
In the case of GLM the funcs look like:
static mat4 transform_GLM(Entity* e) {
    glm::mat4 m;
    m = glm::translate(m, glm::vec3(e->pos_X, e->pos_Y, e->pos_Z));
    m = glm::rotate(m, glm::radians(e->rot_X), {1, 0, 0});
    m = glm::rotate(m, glm::radians(e->rot_Y), {0, 1, 0});
    m = glm::rotate(m, glm::radians(e->rot_Z), {0, 0, 1});

    mat4 mat;
    float *glmmat = glm::value_ptr(m);
    for (auto i = 0; i < 16; ++i) { mat.m[i] = glmmat[i]; }
    return mat;
}

static mat4 transform_GLM_View(Entity* e) {
    glm::mat4 m(1.f);
    m = glm::translate(m, glm::vec3{-e->pos_X, -e->pos_Y, -e->pos_Z});
    m = glm::rotate(m, glm::radians(e->rot_X), {1, 0, 0});
    m = glm::rotate(m, glm::radians(e->rot_Y), {0, 1, 0});
    m = glm::rotate(m, glm::radians(e->rot_Z), {0, 0, 1});

    mat4 mat;
    float *glmmat = glm::value_ptr(m);
    for (auto i = 0; i < 16; ++i) { mat.m[i] = glmmat[i]; }
    return mat;
}
PS: yeah, I noticed there's a difference in signs between transforming the view mat and the model mat, and also that the view one begins with the identity matrix.
BTW, actually trying to answer your question: idk how any of this works, I'm going on gut feeling, Anton's examples, and reading the funcs.
He uses a single y-axis rotation in one example and passes the identity matrix to both the translation and the rotation; I just extrapolated it to the other axes like that because they each touch mostly different parts of the matrix, and it does work with a single cube.
Did I screw up and get lucky because it was just one cube or something? But the GLM funcs don't work properly either; the same artifact happens. Maybe I should just post the entire code.
Ok, so this finally solved the problem: the order of the matrix multiplication for objects/models has to be different from that of the camera, and the coordinates positive:
static mat4 transform_view(Entity* e) {
    mat4 T, R;
    T = translate( identity_mat4(), vec3( -e->pos_X, -e->pos_Y, -e->pos_Z ) );
    R = rotate_x_deg( identity_mat4(), -e->rot_X );
    R = rotate_y_deg( R, -e->rot_Y );
    R = rotate_z_deg( R, -e->rot_Z );
    return R * T;
}

static mat4 transform_model(Entity* e) {
    mat4 T, R;
    T = translate( identity_mat4(), vec3( e->pos_X, e->pos_Y, e->pos_Z ) );
    R = rotate_x_deg( identity_mat4(), e->rot_X );
    R = rotate_y_deg( R, e->rot_Y );
    R = rotate_z_deg( R, e->rot_Z );
    return T * R;
}
This thing is complicated af tbh, but now it sounds a bit more intuitive: the camera moves opposite to everything else, so you do the normal stuff with objects and the inverted stuff with the camera. Duh! Now why the matrix multiplication has to be inverted (T * R instead of R * T), and why the GLM function didn't work, I've got no clue!
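On the "no clue" part: the swapped order falls out of matrix inversion. The view matrix is the inverse of the camera's model matrix, and inverting a product reverses its order: (T * R)^-1 = R^-1 * T^-1, where R^-1 is the rotation with negated angles and T^-1 the translation with negated offsets. A standalone numerical check (a throwaway Mat4, not Anton's or GLM's types):

```cpp
#include <cassert>
#include <cmath>

// Throwaway 4x4 matrix: m[row][col], column-vector convention (v' = M * v).
struct Mat4 { double m[4][4]; };

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat4 identity() {
    Mat4 r{};
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0;
    return r;
}

Mat4 translate(double x, double y, double z) {
    Mat4 r = identity();
    r.m[0][3] = x; r.m[1][3] = y; r.m[2][3] = z;
    return r;
}

Mat4 rotate_z(double rad) {
    Mat4 r = identity();
    r.m[0][0] = std::cos(rad); r.m[0][1] = -std::sin(rad);
    r.m[1][0] = std::sin(rad); r.m[1][1] =  std::cos(rad);
    return r;
}

// model = T(p) * R(a); the matching view is its inverse:
// (T * R)^-1 = R^-1 * T^-1 = R(-a) * T(-p), i.e. swapped order, negated values.
bool view_undoes_model() {
    double a = 0.7, x = 1.0, y = 2.0, z = 3.0;
    Mat4 model = mul(translate(x, y, z), rotate_z(a));
    Mat4 view  = mul(rotate_z(-a), translate(-x, -y, -z));
    Mat4 p = mul(view, model);     // should come out as the identity
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            if (std::fabs(p.m[i][j] - (i == j ? 1.0 : 0.0)) > 1e-9) return false;
    return true;
}
```

So "invert the signs AND swap the order" is one operation, not two separate tricks.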
With these entities:
Entity cam     = Entity({ 0.0f, 0.0f,  2.0f }, { 0.0f, 0.0f, 0.0f });
Entity object  = Entity({ 0.0f, 0.0f,  0.0f }, { 0.0f, 0.0f, 0.0f });
Entity object2 = Entity({ 0.0f, 0.0f, -3.0f }, { 0.0f, 0.0f, 0.0f });
Now I get the desired effect in the video here:
For the GLM function you still need to do the transformation in the opposite order for the model (or camera, I'm not sure which one).
static mat4 transform_GLM(Entity* e) {
    glm::mat4 m(1.f); // explicit identity; a default-constructed glm::mat4 is
                      // not guaranteed to be initialized unless GLM_FORCE_CTOR_INIT is defined
    m = glm::rotate(m, glm::radians(e->rot_X), {1, 0, 0});
    m = glm::rotate(m, glm::radians(e->rot_Y), {0, 1, 0});
    m = glm::rotate(m, glm::radians(e->rot_Z), {0, 0, 1});
    m = glm::translate(m, glm::vec3(e->pos_X, e->pos_Y, e->pos_Z));

    mat4 mat;
    float *glmmat = glm::value_ptr(m);
    for (auto i = 0; i < 16; ++i) { mat.m[i] = glmmat[i]; }
    return mat;
}
In the past few days I've been trying to get a better understanding of transformation matrices, and what helps me is to do all the math by hand (even when it's annoying).
The first thing I did was figure out what conventions to use, and make sure OpenGL and GLSL were doing what I expected. I tried to make things look like what I write when doing the math on paper: I load my matrices with glUniformxxx, and use glClipControl to have the Z coordinates in NDC go from 0 to 1. I then did some nonsense matrix multiply (a matrix containing 1 2 3 4 5...16 multiplied by a row vector containing 17 18 19 20), and verified the output in RenderDoc's Mesh viewer (vs output) to make sure I got the same result as what I computed on paper.
When I was sure everything went as expected, I started to figure out how the transformations worked. I've only done orthographic projection, translation and scale so far (I've done perspective and rotations in the past, but I was never at ease with them), but doing all the math by hand without matrices, and then figuring out how it should be put into matrices, helps me get a better understanding and intuition.
For example, a matrix that represents scale and translation also encodes the order those operations were done in, and that wasn't obvious to me, because if I don't do the math I just have 16 numbers.
This matrix encodes the scale before the translation:

    sx   0    0    0
    0    sy   0    0
    0    0    sz   0
    tx   ty   tz   1

This matrix encodes the translation before the scale:

    sx      0       0       0
    0       sy      0       0
    0       0       sz      0
    tx*sx   ty*sy   tz*sz   1

You can verify this by taking a scale matrix and a translation matrix and doing the multiply in the two orders.
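That two-order check can be done in a few lines; a throwaway sketch in the same row-vector convention as the matrices above (v' = v * M, translation in the bottom row), not part of the original code:

```cpp
#include <cassert>

// Row-vector convention: v' = v * M, so "A before B" composes as A * B
// and the translation components live in the bottom row m[3][0..2].
struct Mat4 { double m[4][4]; };

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat4 scale_mat(double sx, double sy, double sz) {
    Mat4 r{};
    r.m[0][0] = sx; r.m[1][1] = sy; r.m[2][2] = sz; r.m[3][3] = 1;
    return r;
}

Mat4 translate_mat(double tx, double ty, double tz) {
    Mat4 r{};
    r.m[0][0] = r.m[1][1] = r.m[2][2] = r.m[3][3] = 1;
    r.m[3][0] = tx; r.m[3][1] = ty; r.m[3][2] = tz;
    return r;
}
```

Multiplying scale-then-translate leaves the bottom row as (tx, ty, tz); translate-then-scale turns it into (tx*sx, ty*sy, tz*sz), exactly the two matrices written out above.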
So I haven't done camera movement yet, but I'm sure if you take the time to figure out the math, you can sort it out.
Camera movement is basically the same stuff: it is the view matrix transform, but it needs opposite signs and the swapped T * R product at the end.
I highly recommend getting Anton's book and source code; I basically modified his example 9 for convenience: https://github.com/capnramses/antons_opengl_tutorials_book
The perspective is just a function he already has that you load the projection matrix with. It stays there in the shader and you don't mess with it.
The view matrix will be the transform of the camera movements.
The model matrix gets the transform of the cubes'/objects' movement.
That's pretty much it. So in the main loop, if you want cubes rotating and the camera going here and there, you get inputs for them and update the view matrix and the corresponding model matrices.
I can put my code here if you want, but basically camera movement is WSAD and rotation is the mouse difference from the center screen position.
Just an update here:
I never got the GLM functions to work properly; I meddled with them a bit, but since Anton's work fine, I'm sticking with those.
If you want a view camera that does what you expect when looking around (with the y-axis grounded), these are the functions I'm using and they work just fine(*):
(*) the caveat here is that when going to the negative z-axis I need to change the sign of rot_X.
/*--------------------------AFFINE MATRIX FUNCTIONS---------------------------*/
// translate a 4d matrix with xyz array
mat4 translate( const mat4& m, const vec3& v ) {
    mat4 m_t = identity_mat4();
    m_t.m[12] = v.v[0];
    m_t.m[13] = v.v[1];
    m_t.m[14] = v.v[2];
    return m_t * m;
}

// rotate around x axis by an angle in degrees
mat4 rotate_x_deg( const mat4& m, float deg ) {
    // convert to radians
    float rad = deg * ONE_DEG_IN_RAD;
    mat4 m_r = identity_mat4();
    m_r.m[5] = cos( rad );
    m_r.m[9] = -sin( rad );
    m_r.m[6] = sin( rad );
    m_r.m[10] = cos( rad );
    return m_r * m;
}

// rotate around y axis by an angle in degrees
mat4 rotate_y_deg( const mat4& m, float deg ) {
    // convert to radians
    float rad = deg * ONE_DEG_IN_RAD;
    mat4 m_r = identity_mat4();
    m_r.m[0] = cos( rad );
    m_r.m[8] = sin( rad );
    m_r.m[2] = -sin( rad );
    m_r.m[10] = cos( rad );
    return m_r * m;
}

// rotate around z axis by an angle in degrees
mat4 rotate_z_deg( const mat4& m, float deg ) {
    // convert to radians
    float rad = deg * ONE_DEG_IN_RAD;
    mat4 m_r = identity_mat4();
    m_r.m[0] = cos( rad );
    m_r.m[4] = -sin( rad );
    m_r.m[1] = sin( rad );
    m_r.m[5] = cos( rad );
    return m_r * m;
}

mat4 transform_view(Entity* e) {
    mat4 T, R;
    T = translate( identity_mat4(), vec3( -e->pos_X, -e->pos_Y, -e->pos_Z ) );
    R = rotate_x_deg( identity_mat4(), e->rot_X );
    R = rotate_y_deg( R, e->rot_Y );
    R = rotate_z_deg( R, e->rot_Z );
    return R * T;
}

mat4 transform_model(Entity* e) {
    mat4 T, R;
    T = translate( identity_mat4(), vec3( e->pos_X, e->pos_Y, e->pos_Z ) );
    R = rotate_x_deg( identity_mat4(), e->rot_X );
    R = rotate_y_deg( R, e->rot_Y );
    R = rotate_z_deg( R, e->rot_Z );
    return T * R;
}
if (key_W) {
    cam->pos_X += -cos(DEG_TO_RAD(cam->rot_Y + 90))*speed;
    cam->pos_Z += -sin(DEG_TO_RAD(cam->rot_Y + 90))*speed;
}
if (key_S) {
    cam->pos_X += cos(DEG_TO_RAD(cam->rot_Y + 90))*speed;
    cam->pos_Z += sin(DEG_TO_RAD(cam->rot_Y + 90))*speed;
}
if (key_A) {
    cam->pos_X += -cos(DEG_TO_RAD(cam->rot_Y))*speed;
    cam->pos_Z += -sin(DEG_TO_RAD(cam->rot_Y))*speed;
}
if (key_D) {
    cam->pos_X += cos(DEG_TO_RAD(cam->rot_Y))*speed;
    cam->pos_Z += sin(DEG_TO_RAD(cam->rot_Y))*speed;
}

float axis_y = 1.0, axis_x = 1.0;
if (cam->pos_Z < 0) { axis_y = -1.0; }

/// looking up and down, should clamp at 180 deg for realism
cam->rot_X += 0.22*(mouse_y - center_window_y)*axis_y;
/// looking left -- right, clamping at 360 deg
cam->rot_Y += 0.22*(mouse_x - center_window_x)*axis_x;

if (cam->rot_X > 360) cam->rot_X = 0;
else if (cam->rot_X < 0) cam->rot_X = 360;
if (cam->rot_Y > 360) cam->rot_Y = 0;
else if (cam->rot_Y < 0) cam->rot_Y = 360;
Somewhere in there you reposition the mouse at the center of the screen, or use another method for getting mouse rotation if you prefer.
This is not perfect, because when coordinates go negative you need to update the signs of the rotation axes according to the position axes, which gets cumbersome at some point. I just update it when going z-negative, but it's a bit clunky because it assumes you are looking backwards. I don't know how to solve this in a generalized way yet.
EDIT:
Oh, I forgot the perspective values to load into the projection matrix:
#define ONE_DEG_IN_RAD ( 2.0 * M_PI ) / 360.0 // 0.0174533

// input variables
float near = 0.1f;   // clipping plane
float far = 100.0f;  // clipping plane
float fov = 67.0f * ONE_DEG_IN_RAD; // convert 67 degrees to radians
float aspect = (float)g_gl_width / (float)g_gl_height; // aspect ratio

// matrix components
float inverse_range = 1.0f / tan( fov * 0.5f );
float Sx = inverse_range / aspect;
float Sy = inverse_range;
float Sz = -( far + near ) / ( far - near );
float Pz = -( 2.0f * far * near ) / ( far - near );

GLfloat proj_mat[] = {
    Sx,   0.0f, 0.0f,  0.0f,
    0.0f, Sy,   0.0f,  0.0f,
    0.0f, 0.0f, Sz,   -1.0f,
    0.0f, 0.0f, Pz,    0.0f
};
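A quick sanity check on the Sz/Pz components above (a small standalone sketch, not part of the original code): after the perspective divide, a point on the near plane should land at NDC depth -1, and one on the far plane at +1.

```cpp
#include <cassert>
#include <cmath>

// Depth mapping of the projection matrix above:
//   clip.z = Sz * z_eye + Pz,  clip.w = -z_eye  (the -1 in the third column),
// and NDC depth = clip.z / clip.w.
double ndc_depth(double z_eye, double near_p, double far_p) {
    double Sz = -(far_p + near_p) / (far_p - near_p);
    double Pz = -(2.0 * far_p * near_p) / (far_p - near_p);
    double clip_z = Sz * z_eye + Pz;
    double clip_w = -z_eye;
    return clip_z / clip_w;
}
```

With near = 0.1 and far = 100, ndc_depth(-0.1, ...) comes out as -1 and ndc_depth(-100, ...) as +1, which is the standard OpenGL [-1, 1] depth range.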
To anyone reading this, I just updated some parts.
First, eliminate useless matrix multiplications (preserving the order of translation vs rotations for models vs camera):
mat4 transform_view(Entity* e) {
    mat4 mat;
    mat = translate( identity_mat4(), vec3( -e->pos_X, -e->pos_Y, -e->pos_Z ) );
    mat = rotate_x_deg( mat, e->rot_X );
    mat = rotate_y_deg( mat, e->rot_Y );
    mat = rotate_z_deg( mat, e->rot_Z );
    return mat;
}

mat4 transform_model(Entity* e) {
    mat4 mat;
    mat = rotate_x_deg( identity_mat4(), e->rot_X );
    mat = rotate_y_deg( mat, e->rot_Y );
    mat = rotate_z_deg( mat, e->rot_Z );
    mat = translate( mat, vec3( e->pos_X, e->pos_Y, e->pos_Z ) );
    return mat;
}
Notice also that in the input handling, the rotation fix below is wrong:
if (cam->pos_Z < 0) { axis_y = -1.0; }
cam->rot_X += 0.22*(mouse_y - last_mouse_y)*axis_y;
To get proper camera movement when we look backwards, the flip is not a function of the negative z position but of the rotation around the y axis, so it should be replaced by:
if ((cam->rot_Y > -270 && cam->rot_Y < -90) || (cam->rot_Y > 90 && cam->rot_Y < 270)) {
    axis_y = -1.0;
}
cam->rot_X += 0.22*(mouse_y - last_mouse_y)*axis_y;
This is the range where the cosine changes sign to negative.
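If it helps, that yaw-range condition is just "where cos(rot_Y) is negative"; a tiny standalone check (the PI constant is defined locally here, it's not Anton's macro):

```cpp
#include <cassert>
#include <cmath>

// The pitch flip in the snippet above is needed exactly where cos(rot_Y)
// goes negative: yaw strictly between 90 and 270 degrees (mod 360),
// i.e. when the camera faces "backwards".
bool needs_flip(double rot_y_deg) {
    const double PI = 3.14159265358979323846;
    double rad = rot_y_deg * PI / 180.0;
    return std::cos(rad) < 0.0;
}
```

Testing the sign of the cosine directly also covers the negative-degree ranges ((-270, -90) etc.) without spelling them out.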
With these fixes everything seems smooth; the camera is doing the right thing and going in the right directions.
I'm not sure what could be optimized here, and I have no idea what quaternions would do for these rotations, so if you know how to explain, please feel free.