LAPACK

As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems. Well-known implementations include Intel's Math Kernel Library (MKL), Apple's Accelerate framework, and the optimized LAPACK shipped with OpenBLAS.

Since LAPACK typically calls underlying BLAS routines to perform the bulk of its computations, simply linking to a better-tuned BLAS implementation can be enough to significantly improve performance. As a result, LAPACK is not reimplemented as often as BLAS is.
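As a rough illustration, the following C++ sketch solves a small linear system through LAPACKE, the standard C interface to LAPACK, using the dgesv routine (LU factorization with partial pivoting). The matrix values and the link lines mentioned in the comments are illustrative assumptions; the point is that the same source can be linked against the reference Netlib libraries or a tuned backend such as OpenBLAS or MKL without any code changes.

```cpp
// Minimal sketch: solving A x = b through the LAPACKE C interface.
// Only the link line changes between backends, e.g. (depending on packaging)
//   g++ solve.cpp -llapacke -llapack -lblas      (reference implementation)
//   g++ solve.cpp -lopenblas                     (OpenBLAS with LAPACK built in)
#include <lapacke.h>
#include <cstdio>

int main() {
    // Illustrative 3x3 system, stored in row-major order.
    double A[9] = { 4, 1, 2,
                    1, 5, 3,
                    2, 3, 6 };
    double b[3] = { 7, 9, 11 };   // right-hand side; overwritten with the solution
    lapack_int ipiv[3];           // pivot indices produced by the LU factorization

    // dgesv = double-precision GEneral matrix SolVe (LU with partial pivoting).
    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, b, 1);
    if (info != 0) {
        std::fprintf(stderr, "dgesv failed: info = %d\n", static_cast<int>(info));
        return 1;
    }
    std::printf("x = [%f, %f, %f]\n", b[0], b[1], b[2]);
    return 0;
}
```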

These projects provide functionality similar to that of LAPACK, but with a main interface that differs from LAPACK's:

Eigen: a header-only C++ library for linear algebra. It includes a BLAS implementation and a partial LAPACK implementation for compatibility (see the sketch after this list).
MAGMA: the Matrix Algebra on GPU and Multicore Architectures project develops a dense linear algebra library similar to LAPACK, but for heterogeneous and hybrid architectures, including multicore systems accelerated with GPGPUs.
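For comparison with the LAPACKE sketch above, the following is a minimal sketch of the same kind of solve written against Eigen's native interface (the matrix values are the same illustrative ones assumed earlier). No LAPACK-style workspace or pivot arrays appear in user code; the factorization object manages them internally.

```cpp
// Minimal sketch: the same linear solve using Eigen's native, header-only API.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix3d A;
    A << 4, 1, 2,
         1, 5, 3,
         2, 3, 6;
    Eigen::Vector3d b(7, 9, 11);

    // LU factorization with partial pivoting, analogous to LAPACK's dgesv.
    Eigen::Vector3d x = A.partialPivLu().solve(b);
    std::cout << "x = " << x.transpose() << std::endl;
    return 0;
}
```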