Algorithm-based Fault Tolerance for Dense Matrix Factorizations, Multiple Failures, and Accuracy

Title: Algorithm-based Fault Tolerance for Dense Matrix Factorizations, Multiple Failures, and Accuracy
Publication Type: Journal Article
Year of Publication: 2015
Authors: Bouteiller, A., T. Herault, G. Bosilca, P. Du, and J. Dongarra
Secondary Authors: Gibbons, P. B.
Journal: ACM Transactions on Parallel Computing
Volume: 1
Issue: 2
Number: 10
Pagination: 10:1-10:28
Date Published: 2015-01
Keywords: ABFT, algorithms, fault-tolerance, High Performance Computing, linear algebra
Abstract: Dense matrix factorizations, such as LU, Cholesky, and QR, are widely used for scientific applications that require solving systems of linear equations, eigenvalue problems, and linear least squares problems. Such computations are normally carried out on supercomputers, whose ever-growing scale induces a fast decline of the Mean Time To Failure (MTTF). This paper proposes a new hybrid approach, based on Algorithm-Based Fault Tolerance (ABFT), to help matrix factorization algorithms survive fail-stop failures. We consider extreme conditions, such as the absence of any reliable node and the possibility of losing both data and checksum in a single failure. We present a generic solution for protecting the right factor, where the updates are applied, for all of the above-mentioned factorizations. For the left factor, where the panel has been applied, we propose a scalable checkpointing algorithm. This algorithm features a high degree of checkpointing parallelism and cooperatively utilizes the checksum storage left over from the right-factor protection. The fault-tolerant algorithms derived from this hybrid solution are applicable to a wide range of dense matrix factorizations, with minor modifications. Theoretical analysis shows that the fault tolerance overhead decreases inversely with the number of computing units and the problem size. Experimental results of LU and QR factorization on the Kraken (Cray XT5) supercomputer validate the theoretical evaluation and confirm negligible overhead, both with and without errors. Applicability to multiple failures and accuracy after multiple recoveries are also considered.
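The checksum-based protection described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual protocol (which protects block-cyclic distributed factorizations during the update phase); it only demonstrates the core ABFT idea under simplified assumptions: a matrix augmented with an extra row of column sums lets a single lost row be rebuilt from the checksum and the surviving rows.

```python
# Minimal illustrative sketch of the ABFT checksum idea (not the paper's
# distributed algorithm). A matrix stored as a list of rows is augmented
# with one checksum row holding the column sums; after a fail-stop loss
# of one data row, that row is recovered by subtraction.

def add_checksum_row(a):
    """Append a row of column sums to matrix `a` (list of rows)."""
    cols = len(a[0])
    checksum = [sum(row[j] for row in a) for j in range(cols)]
    return a + [checksum]

def recover_row(a_ck, lost):
    """Rebuild data row `lost` from the checksum row and surviving rows."""
    cols = len(a_ck[0])
    survivors = [row for i, row in enumerate(a_ck[:-1]) if i != lost]
    checksum = a_ck[-1]
    return [checksum[j] - sum(row[j] for row in survivors) for j in range(cols)]

# Example: "lose" row 1 of a 3x3 matrix and rebuild it.
A = [[2.0, 1.0, 0.0],
     [4.0, 3.0, 1.0],
     [0.0, 5.0, 2.0]]
A_ck = add_checksum_row([row[:] for row in A])
print(recover_row(A_ck, 1))  # prints the lost row: [4.0, 3.0, 1.0]
```

Because the same linear relation is preserved by the factorization's update operations, the checksum can be carried through the computation rather than recomputed, which is what makes the overhead of this family of methods small.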
DOI: 10.1145/2686892