mmozeiko
For bigger matrices, either do Gaussian elimination, which can be numerically unstable if not implemented correctly, or do something more advanced based on a matrix decomposition (LU, SVD, Cholesky?).
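A minimal sketch of the first option, assuming a dense row-major n x n matrix of doubles: Gaussian elimination with partial pivoting (picking the largest pivot in each column is the usual way to avoid the instability mentioned above), solving Mx = b in place. The function name and the singularity threshold are just made up for the example.

```c
// Sketch: solve M x = b in place with Gaussian elimination + partial pivoting.
// M is n x n, row-major; both M and b are destroyed, the solution ends up in b.
// Returns 0 if a pivot is (near) zero, i.e. M looks singular.
#include <math.h>
#include <stddef.h>

static int solve_inplace(double *M, double *b, size_t n)
{
    for (size_t k = 0; k < n; k++) {
        // pick the row with the largest pivot in column k to keep things stable
        size_t pivot = k;
        for (size_t i = k + 1; i < n; i++) {
            if (fabs(M[i*n + k]) > fabs(M[pivot*n + k])) pivot = i;
        }
        if (fabs(M[pivot*n + k]) < 1e-12) return 0;

        if (pivot != k) {
            for (size_t j = k; j < n; j++) {
                double t = M[k*n + j]; M[k*n + j] = M[pivot*n + j]; M[pivot*n + j] = t;
            }
            double t = b[k]; b[k] = b[pivot]; b[pivot] = t;
        }

        // eliminate column k from the rows below
        for (size_t i = k + 1; i < n; i++) {
            double f = M[i*n + k] / M[k*n + k];
            for (size_t j = k; j < n; j++) M[i*n + j] -= f * M[k*n + j];
            b[i] -= f * b[k];
        }
    }

    // back-substitution
    for (size_t i = n; i-- > 0; ) {
        double s = b[i];
        for (size_t j = i + 1; j < n; j++) s -= M[i*n + j] * b[j];
        b[i] = s / M[i*n + i];
    }
    return 1;
}
```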
For bigger matrices, do as much as you can to avoid computing an explicit inverse. Inverse matrices tend to be numerically unstable.
Remember, the reason you think you want an inverse is to solve linear systems of the form Mx = b. There is no point computing an inverse if you are only using M once; you can do Gaussian elimination using only the space for the original matrix and the b vector. If you are using M multiple times with different b, then LU decomposition (or Cholesky, if M is symmetric) is far more stable than computing an inverse.
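A rough sketch of that factor-once, solve-many idea, again assuming a dense row-major matrix; lu_factor and lu_solve are hypothetical names, and LU with partial pivoting is one of several ways to do this.

```c
// Factor M = P*L*U once (partial pivoting), then reuse the factors to solve
// M x = b for as many right-hand sides as you like. L and U are stored packed
// in place of M (L has an implicit unit diagonal); perm records the row swaps.
#include <math.h>
#include <stddef.h>

static int lu_factor(double *M, size_t *perm, size_t n)
{
    for (size_t i = 0; i < n; i++) perm[i] = i;
    for (size_t k = 0; k < n; k++) {
        size_t p = k;
        for (size_t i = k + 1; i < n; i++)
            if (fabs(M[i*n + k]) > fabs(M[p*n + k])) p = i;
        if (fabs(M[p*n + k]) < 1e-12) return 0;
        if (p != k) {
            for (size_t j = 0; j < n; j++) {
                double t = M[k*n + j]; M[k*n + j] = M[p*n + j]; M[p*n + j] = t;
            }
            size_t t = perm[k]; perm[k] = perm[p]; perm[p] = t;
        }
        for (size_t i = k + 1; i < n; i++) {
            M[i*n + k] /= M[k*n + k];                  // multiplier, stored as L
            for (size_t j = k + 1; j < n; j++)
                M[i*n + j] -= M[i*n + k] * M[k*n + j]; // update the U block
        }
    }
    return 1;
}

static void lu_solve(const double *LU, const size_t *perm,
                     const double *b, double *x, size_t n)
{
    // forward substitution with L (unit diagonal), applying the permutation to b
    for (size_t i = 0; i < n; i++) {
        double s = b[perm[i]];
        for (size_t j = 0; j < i; j++) s -= LU[i*n + j] * x[j];
        x[i] = s;
    }
    // back substitution with U
    for (size_t i = n; i-- > 0; ) {
        double s = x[i];
        for (size_t j = i + 1; j < n; j++) s -= LU[i*n + j] * x[j];
        x[i] = s / LU[i*n + i];
    }
}
```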
Some places where big matrices turn up in games (e.g. fluid simulation, cloth simulation) also tend to have special structure, which both makes computing an explicit inverse a bad idea and gives you something you can exploit to build a better solver. You should especially avoid explicit inverses if M is sparse, because the inverse won't be sparse, and so will take up far more memory.
They are usually sparse (since "particles" tend to interact only with "particles" that are close by) and symmetric (Newton's third law), so special techniques like preconditioned conjugate gradient solvers can work extremely well. Some matrices are diagonally dominant. Other matrices reflect the geometry of the problem.
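A minimal sketch of a Jacobi (diagonal) preconditioned conjugate gradient, assuming M is symmetric positive definite and that you supply a y = M*x callback, which is where the sparse structure actually gets exploited. The names (matvec_fn, pcg_solve) and the stopping rule are just for the example.

```c
#include <math.h>
#include <stddef.h>
#include <stdlib.h>

// y = M*x; a sparse format (CSR, per-particle neighbour lists, ...) lives here.
typedef void matvec_fn(const double *x, double *y, size_t n, void *user);

static double dot(const double *a, const double *b, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += a[i] * b[i];
    return s;
}

// Solve M x = b for symmetric positive definite M with a Jacobi preconditioner
// (divide by the diagonal). x holds the initial guess on entry, the solution on exit.
static void pcg_solve(matvec_fn *matvec, void *user, const double *diag,
                      const double *b, double *x, size_t n,
                      int max_iters, double tol)
{
    double *r  = malloc(n * sizeof *r);   // residual b - M*x
    double *z  = malloc(n * sizeof *z);   // preconditioned residual
    double *p  = malloc(n * sizeof *p);   // search direction
    double *Mp = malloc(n * sizeof *Mp);

    matvec(x, Mp, n, user);
    for (size_t i = 0; i < n; i++) {
        r[i] = b[i] - Mp[i];
        z[i] = r[i] / diag[i];
        p[i] = z[i];
    }
    double rz = dot(r, z, n);

    for (int it = 0; it < max_iters && sqrt(dot(r, r, n)) > tol; it++) {
        matvec(p, Mp, n, user);
        double alpha = rz / dot(p, Mp, n);
        for (size_t i = 0; i < n; i++) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Mp[i];
            z[i]  = r[i] / diag[i];
        }
        double rz_new = dot(r, z, n);
        double beta = rz_new / rz;
        rz = rz_new;
        for (size_t i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
    }

    free(r); free(z); free(p); free(Mp);
}
```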
I forget who it was who pointed out that linear analysis is finding PhD-level solutions to high school-level problems.