Math under the hood

We recently had a company called Simulia visit us at a deep dive lab.  Their product, Abaqus, a commercial software package for finite element analysis, prompted me to quickly brush up on my numerical computation knowledge.  After all, you can't compute without numerical methods under the hood!  While Abaqus specializes in nonlinear solvers, I figured I should share some notes on linear solvers, and on solvers in general.  In HPC, many applications boil down to solving a very large system of linear equations.  That is something we have all seen before: 2x + y = 0, x - y = 0 is a simple system of linear equations, and its solution is (x = 0, y = 0).  Expressed in matrix form, we have Ax = b.

Here, the matrix A is ((2, 1), (1, -1)), the vector x is (x, y), and b is (0, 0).
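
As a quick runnable illustration, here's that little 2x2 system solved in Python with NumPy (the language and library are just my pick for the sketch; nothing in the discussion depends on them):

```python
# A minimal sketch of the 2x2 example above, using NumPy's dense solver.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, -1.0]])   # the matrix A
b = np.array([0.0, 0.0])      # the right-hand side b

x = np.linalg.solve(A, b)     # direct dense solve of Ax = b
print(x)                      # [0. 0.]  ->  x = 0, y = 0
```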

Now imagine there are thousands or millions of variables in the system.  Then we have a time and space complexity problem on our hands, which means we'll need some parallel processing.  Fortunately, there are already hundreds of software packages that do this for us.  These codes usually fall into two categories: direct method solvers and iterative method solvers.

Direct methods:

There are several variants of the Gaussian elimination algorithm with different pivoting approaches.  Parallel implementations of direct methods are difficult in general.  General-purpose software packages include SuperLU and MUMPS, and there are other packages that handle specialized problems.  Direct methods, such as Gaussian elimination and LU factorization, are expensive at O(n³) operations, and they can change the sparsity pattern of the matrix.  However, they are robust and accurate.  Parallel performance is usually good up to around 64 processors; using more than a few dozen processors may be counterproductive.
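
Here's a small sketch of a sparse direct solve.  SciPy's splu() wraps the (sequential) SuperLU library mentioned above; the tridiagonal test matrix is my own stand-in for a real assembled system, not anything from Abaqus:

```python
# Sketch: sparse LU factorization via SciPy, which wraps sequential SuperLU.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# A simple sparse tridiagonal test matrix (stand-in for a real FEA system).
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

lu = spla.splu(A)   # LU factorization; this step can introduce fill-in
x = lu.solve(b)     # cheap forward/back substitution

print(np.linalg.norm(A @ x - b))   # residual: near machine precision
```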

Iterative methods:

There are quite a few iterative methods out there.  Common ones are Richardson, Jacobi, Gauss-Seidel, conjugate gradient (CG), generalized minimal residual (GMRES), successive over-relaxation (SOR), and so on.  The most commonly used are GMRES and CG.  Iterative methods usually don't change the sparsity pattern of the original linear operator (the A matrix in Ax = b), a property that is quite desirable in many large applications.  The downside of iterative solvers is slow convergence, and the fact that their convergence depends on the form of the linear operator.  In other words, iterative methods may not work at all for some matrices.  Typically, a parallel implementation of an iterative method involves some preconditioner.
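
For a taste of the iterative side, here's a sketch running CG with SciPy.  The matrix is a 1D Poisson operator, a choice of mine purely so that CG (which requires a symmetric positive definite matrix) is guaranteed to apply:

```python
# Sketch: conjugate gradient on a symmetric positive definite matrix.
# CG only touches A through matrix-vector products, so the sparsity
# pattern of A is never modified.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = spla.cg(A, b)                  # info == 0 means it converged
print(info, np.linalg.norm(A @ x - b))
```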

One family of preconditioning techniques is the so-called Schwarz methods, which are based on domain decomposition.  Parallel performance of iterative solvers with the right choice of preconditioner can be quite good, showing linear speedup with up to a thousand subdomains or processors.  However, successfully applying a generic parallel iterative solver to a specialized linear system is still somewhat of an art.  Domain decomposition is an active area of research as more and more application developers turn to parallel processing.
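
SciPy doesn't ship a Schwarz or domain-decomposition preconditioner, so the sketch below substitutes an incomplete LU (ILU) factorization in its place, purely to show the mechanics of handing a preconditioner to GMRES:

```python
# Sketch: preconditioned GMRES. An ILU factorization stands in for the
# Schwarz/domain-decomposition preconditioners discussed above, which
# SciPy does not provide.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A)                                 # approximate LU of A
M = spla.LinearOperator(A.shape, matvec=ilu.solve)  # wrap it as M ~ inverse of A

x, info = spla.gmres(A, b, M=M)
print(info)   # 0 means GMRES converged
```

The pattern is the same for any preconditioner: wrap whatever applies M as a linear operator and pass it to the solver.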

Next time, I'll have some nonlinear solver notes up.  I promise it will be shorter.  If you are itching to see more code examples, I've got those coming up as well.  Thanks for reading.
