To add a light source to my scene I'm following the tutorials on learnopengl.com.

The vertex shader looks like this:

```glsl
#version 410

layout (location = 0) in vec3 points;
layout (location = 1) in vec3 normals;
layout (location = 2) in vec2 texcoords;

uniform mat4 proj_mat, view_mat, model_mat;

out vec3 frag_pos;
out vec3 face_normal;
out vec2 tex_pos;

void main () {
    tex_pos = texcoords;
    face_normal = normals;
    frag_pos = vec3(model_mat * vec4(points, 1.0));
    gl_Position = proj_mat * view_mat * vec4(frag_pos, 1.0);
}
```

And the fragment shader:

```glsl
#version 410

in vec3 face_normal;
in vec3 frag_pos;
in vec2 tex_pos;

uniform sampler2D basic_texture;
uniform vec3 light_pos;

out vec4 frag_color;

void main() {
    vec4 texel = texture(basic_texture, tex_pos);
    vec3 objectColor = vec3(texel);
    vec3 lightColor = vec3(1.0, 1.0, 1.0);

    // ambient
    float ambientStrength = 0.25;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(face_normal);
    vec3 lightDir = normalize(light_pos - frag_pos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    vec3 result = (ambient + diffuse) * objectColor;
    frag_color = vec4(result, 1.0);
}
```

This illuminates the textured objects correctly, and it also works when the light position changes, but the lighting does NOT update at all when my objects move around, and I have no idea why.

For instance, if my cube rotates, the faces that were lit in the initial orientation stay lit; the lighting never follows the rotation.

Notice that the light direction should update along with frag_pos, which is recomputed from the model matrix every frame, but it doesn't.

Is this because I need to update every single normal to account for the model's translation/rotation transforms?