John Lorre
26 posts
Matrix inversion methods
So in day 101 Casey went to great lengths explaining matrix inversion, and it seemed he was going to use Gauss-Jordan elimination for it. For small matrices he also mentioned the possibility of using the adjugate/matrix of cofactors.

I am interested to know whether there is some advantage to choosing one over the other, let's say for matrices up to 4x4.
Mārtiņš Možeiko
2559 posts / 2 projects
For small matrices (2x2, 3x3) you just write the equations out directly and use them. Cramer's rule will give you the equations. Don't do Gauss elimination.
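
For the 2x2 case, "write the equations directly" comes down to just this (a minimal sketch - the function name, row-major layout and epsilon are arbitrary choices):

```c
#include <math.h>

// 2x2 inverse straight from Cramer's rule: inv(M) = adj(M) / det(M).
// Row-major layout: m[0] m[1] / m[2] m[3]. Returns 0 when (nearly) singular.
static int Invert2x2(const float m[4], float out[4])
{
    float det = m[0]*m[3] - m[1]*m[2];
    if (fabsf(det) < 1e-8f) return 0;
    float invDet = 1.0f / det;
    out[0] =  m[3] * invDet;
    out[1] = -m[1] * invDet;
    out[2] = -m[2] * invDet;
    out[3] =  m[0] * invDet;
    return 1;
}
```

The 3x3 version is the same pattern, except each adjugate entry is a 2x2 determinant.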

For 4x4 I would say write SSE code, something like this: http://download.intel.com/design/PentiumIII/sml/24504301.pdf (Section 5.3, page 8) - that is derived from the Cramer's rule formula.
(There is slightly better SSE code for this around, but it will require a bit of cleanup to remove its DirectXMath dependencies.)
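
For reference (and as something to test the SSE version against), the scalar Cramer's rule / adjugate approach for 4x4 looks roughly like this - just a sketch with made-up names and a row-major layout, not the optimized form from the paper:

```c
#include <math.h>

// Determinant of the 3x3 minor of a row-major 4x4 matrix m,
// with row r and column c removed.
static float Minor3x3(const float m[16], int r, int c)
{
    float s[9];
    int i = 0;
    for (int row = 0; row < 4; ++row) {
        if (row == r) continue;
        for (int col = 0; col < 4; ++col) {
            if (col == c) continue;
            s[i++] = m[4*row + col];
        }
    }
    return s[0]*(s[4]*s[8] - s[5]*s[7])
         - s[1]*(s[3]*s[8] - s[5]*s[6])
         + s[2]*(s[3]*s[7] - s[4]*s[6]);
}

// 4x4 inverse via the adjugate: inv(M) = transpose(cofactors(M)) / det(M).
// Returns 0 when the matrix is (nearly) singular.
static int Invert4x4(const float m[16], float out[16])
{
    float det = 0.0f;
    for (int c = 0; c < 4; ++c) {
        float cof = ((c & 1) ? -1.0f : 1.0f) * Minor3x3(m, 0, c);
        det += m[c] * cof;   // expansion along the first row
    }
    if (fabsf(det) < 1e-6f) return 0;
    float invDet = 1.0f / det;
    for (int r = 0; r < 4; ++r) {
        for (int c = 0; c < 4; ++c) {
            float cof = (((r + c) & 1) ? -1.0f : 1.0f) * Minor3x3(m, r, c);
            out[4*c + r] = cof * invDet;   // transposed: adjugate = cofactor^T
        }
    }
    return 1;
}
```

The SSE version in the Intel paper computes the same cofactors; it just shares the 2x2 sub-determinants between them and processes four at a time.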

For bigger matrices do either Gauss elimination, which can be numerically unstable if not implemented correctly, or something more advanced based on a matrix decomposition (LU, SVD, Cholesky?).
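
If you do go the Gauss-Jordan route, partial pivoting is the part that keeps it from blowing up. A sketch of what I mean (hypothetical names; m is destroyed in the process and inv receives the result):

```c
#include <math.h>

// Gauss-Jordan inversion of an n x n row-major matrix with partial pivoting.
// m is reduced to the identity in place while inv accumulates the inverse.
// Returns 0 when no usable pivot can be found (singular matrix).
static int InvertNxN(float *m, float *inv, int n)
{
    for (int r = 0; r < n; ++r)
        for (int c = 0; c < n; ++c)
            inv[n*r + c] = (r == c) ? 1.0f : 0.0f;

    for (int col = 0; col < n; ++col) {
        // partial pivoting: bring up the row with the largest |entry| in this column
        int pivot = col;
        for (int r = col + 1; r < n; ++r)
            if (fabsf(m[n*r + col]) > fabsf(m[n*pivot + col])) pivot = r;
        if (fabsf(m[n*pivot + col]) < 1e-6f) return 0;
        if (pivot != col) {
            for (int c = 0; c < n; ++c) {
                float t;
                t = m[n*col + c];   m[n*col + c]   = m[n*pivot + c];   m[n*pivot + c]   = t;
                t = inv[n*col + c]; inv[n*col + c] = inv[n*pivot + c]; inv[n*pivot + c] = t;
            }
        }
        // scale the pivot row so the pivot becomes 1
        float s = 1.0f / m[n*col + col];
        for (int c = 0; c < n; ++c) { m[n*col + c] *= s; inv[n*col + c] *= s; }
        // eliminate this column from every other row
        for (int r = 0; r < n; ++r) {
            if (r == col) continue;
            float f = m[n*r + col];
            for (int c = 0; c < n; ++c) {
                m[n*r + c]   -= f * m[n*col + c];
                inv[n*r + c] -= f * inv[n*col + c];
            }
        }
    }
    return 1;
}
```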
John Lorre
26 posts
Thank you for your reply and the enlightenment :)

So yeah, that was what I thought, but I am no expert either. I did wonder why Casey was going for Gauss-Jordan elimination; at least it seemed he wanted to go that route, so I wondered whether there is some benefit to using it.

But I also see we have not really finished the matrix topic yet. Maybe he would have gone for something simpler in the end.
Mārtiņš Možeiko
2559 posts / 2 projects
My guess would be that he chose Gauss elimination because it is very easy to explain and show how it works. More advanced inversion algorithms need more explanation and "strange" terms (like eigenvalue or eigenvector).
Andrew Bromage
183 posts / 1 project
Research engineer, resident maths nerd (Erdős number 3).
mmozeiko
For bigger matrices do either Gauss elimination, which can be numerically unstable if not implemented correctly, or something more advanced based on a matrix decomposition (LU, SVD, Cholesky?).

For bigger matrices, do as much as you can to avoid computing an explicit inverse. Inverse matrices tend to be numerically unstable.

Remember, the reason why you think you want an inverse is to solve linear systems of the form Mx = b. There is no point retaining an inverse if you are only using M once; you can do Gaussian elimination using only the space for the original matrix and the b vector. If you are using M multiple times with different b, then LU decomposition (or Cholesky, if M is symmetric) is far more stable than computing an inverse.
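
To make that concrete, here is the shape of the LU approach - a sketch under the usual assumptions (row-major storage, partial pivoting; the names are mine):

```c
#include <math.h>

// Factor P*M = L*U in place: L sits below the diagonal (its unit diagonal is
// implicit), U on and above it. perm records the row swaps.
// Returns 0 when the matrix is (nearly) singular.
static int LUDecompose(float *a, int *perm, int n)
{
    for (int i = 0; i < n; ++i) perm[i] = i;
    for (int col = 0; col < n; ++col) {
        int pivot = col;
        for (int r = col + 1; r < n; ++r)
            if (fabsf(a[n*r + col]) > fabsf(a[n*pivot + col])) pivot = r;
        if (fabsf(a[n*pivot + col]) < 1e-6f) return 0;
        if (pivot != col) {
            int t = perm[col]; perm[col] = perm[pivot]; perm[pivot] = t;
            for (int c = 0; c < n; ++c) {
                float tmp = a[n*col + c];
                a[n*col + c] = a[n*pivot + c];
                a[n*pivot + c] = tmp;
            }
        }
        for (int r = col + 1; r < n; ++r) {
            a[n*r + col] /= a[n*col + col];                 // multiplier -> L
            for (int c = col + 1; c < n; ++c)
                a[n*r + c] -= a[n*r + col] * a[n*col + c];  // update -> U
        }
    }
    return 1;
}

// Solve M*x = b using the factorization; call as many times as you have b's.
static void LUSolve(const float *a, const int *perm, const float *b, float *x, int n)
{
    for (int i = 0; i < n; ++i) {          // forward substitution: L*y = P*b
        x[i] = b[perm[i]];
        for (int j = 0; j < i; ++j) x[i] -= a[n*i + j] * x[j];
    }
    for (int i = n - 1; i >= 0; --i) {     // back substitution: U*x = y
        for (int j = i + 1; j < n; ++j) x[i] -= a[n*i + j] * x[j];
        x[i] /= a[n*i + i];
    }
}
```

You pay for the O(n^3) factorization once, and every additional right-hand side costs only O(n^2).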

Some places where big matrices turn up in games (e.g. fluid simulation, cloth simulation) also tend to have special structure, which both makes computing an explicit inverse a bad idea and may give you something you can exploit to build a better solver. You should especially avoid explicit inverses if M is sparse, because the inverse won't be sparse and hence will take up far more memory.

These matrices are usually sparse (since "particles" tend to interact only with "particles" that are close by) and symmetric (Newton's second law), so special techniques like preconditioned conjugate gradient solvers can work extremely well. Some matrices are diagonally dominant; others reflect the geometry of the problem.
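
For the symmetric positive-definite case the whole solver fits on a page. Here is a sketch of Jacobi-preconditioned conjugate gradient (the names are mine, and the matvec is dense to keep it short; in a real solver that matvec is exactly where sparsity pays off):

```c
#include <math.h>

// Jacobi-preconditioned conjugate gradient for A*x = b, with A symmetric
// positive-definite. Never forms an inverse; it only touches A through a
// matrix-vector product. x holds the initial guess on entry (all zeros is
// fine) and the solution on exit. r, z, p, Ap are caller scratch, length n.
static void SolveCG(const float *A, const float *b, float *x, int n,
                    int maxIter, float tol,
                    float *r, float *z, float *p, float *Ap)
{
    for (int i = 0; i < n; ++i) {          // r = b - A*x
        float ax = 0.0f;
        for (int j = 0; j < n; ++j) ax += A[n*i + j] * x[j];
        r[i] = b[i] - ax;
    }
    float rz = 0.0f;
    for (int i = 0; i < n; ++i) {          // z = D^-1 * r (Jacobi preconditioner)
        z[i] = r[i] / A[n*i + i];
        p[i] = z[i];
        rz += r[i] * z[i];
    }
    for (int iter = 0; iter < maxIter; ++iter) {
        for (int i = 0; i < n; ++i) {      // Ap = A*p (swap in a sparse matvec here)
            float sum = 0.0f;
            for (int j = 0; j < n; ++j) sum += A[n*i + j] * p[j];
            Ap[i] = sum;
        }
        float pAp = 0.0f;
        for (int i = 0; i < n; ++i) pAp += p[i] * Ap[i];
        float alpha = rz / pAp;

        float rr = 0.0f;
        for (int i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
            rr += r[i] * r[i];
        }
        if (sqrtf(rr) < tol) break;        // residual small enough: converged

        float rzNew = 0.0f;
        for (int i = 0; i < n; ++i) {
            z[i] = r[i] / A[n*i + i];
            rzNew += r[i] * z[i];
        }
        float beta = rzNew / rz;
        rz = rzNew;
        for (int i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];
    }
}
```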

I forget who it was who pointed out that linear analysis is finding PhD-level solutions to high school-level problems.