Batch QR Factorization on GPUs: Design, Optimization, and Tuning

Title: Batch QR Factorization on GPUs: Design, Optimization, and Tuning
Publication Type: Book Chapter
Year of Publication: 2022
Authors: Abdelfattah, A., S. Tomov, and J. Dongarra
Editor: Groen, D., C. de Mulatier, M. Paszyński, V. V. Krzhizhanovskaya, J. J. Dongarra, and P. M. A. Sloot
Book Title: Lecture Notes in Computer Science
Volume: 13350
Date Published: 2022-06
Publisher: Springer International Publishing
City: Cham
ISBN Number: 978-3-031-08750-9
Keywords: Batch linear algebra, GPU computing, QR factorization
Abstract

QR factorization of dense matrices is a ubiquitous tool in high performance computing (HPC). From solving linear systems and least squares problems to eigenvalue problems and singular value decompositions, a high-performance QR factorization is fundamental to computer simulations and many applications. More importantly, the QR factorization of a batch of relatively small matrices has attracted considerable attention in sparse direct solvers and low-rank approximations for hierarchical matrices. To address this interest and demand, we developed and present a high-performance batch QR factorization for Graphics Processing Units (GPUs). We present a multi-level blocking strategy that adjusts various algorithmic designs to the size of the input matrices. We also show that following the LAPACK QR design convention, while still useful, is significantly outperformed by unconventional code structures that increase data reuse. The performance results show multi-fold speedups against state-of-the-art libraries on the latest GPU architectures from both NVIDIA and AMD.
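
The batch setting described in the abstract can be made concrete with a small sketch. The CUDA code below is a deliberately naive batched Householder QR, not the chapter's implementation: one GPU thread factorizes one small column-major matrix, following the LAPACK geqrf storage convention (R in the upper triangle, reflectors below the diagonal, scaling factors in tau). The kernel name, matrix sizes, and launch configuration are illustrative assumptions; the optimized kernels described in the chapter rely on multi-level blocking and much more aggressive data reuse than this per-thread sketch.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Naive batched Householder QR (illustration only): one GPU thread
// factorizes one m-by-n column-major matrix, following the LAPACK
// dgeqrf storage convention (R in the upper triangle, reflectors v
// below the diagonal, scaling factors in tau).
__global__ void geqrf_batched_naive(int m, int n, double *A, int lda,
                                    double *tau, int batch)
{
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= batch) return;

    int kmax = (m < n) ? m : n;
    double *Ab   = A   + (size_t)b * lda * n;   // this thread's matrix
    double *taub = tau + (size_t)b * kmax;      // its Householder scalars

    for (int k = 0; k < kmax; ++k) {
        // Norm of the k-th column strictly below the diagonal.
        double xnorm2 = 0.0;
        for (int i = k + 1; i < m; ++i) {
            double a = Ab[i + k * lda];
            xnorm2 += a * a;
        }
        double alpha = Ab[k + k * lda];
        if (xnorm2 == 0.0) { taub[k] = 0.0; continue; }  // already triangular

        double normx = sqrt(alpha * alpha + xnorm2);
        double beta  = (alpha >= 0.0) ? -normx : normx;
        taub[k]      = (beta - alpha) / beta;
        double scal  = 1.0 / (alpha - beta);

        // Store the reflector v (v[0] = 1 implicitly) below the diagonal.
        for (int i = k + 1; i < m; ++i)
            Ab[i + k * lda] *= scal;
        Ab[k + k * lda] = beta;

        // Apply  H = I - tau * v * v^T  to the trailing columns.
        for (int j = k + 1; j < n; ++j) {
            double w = Ab[k + j * lda];
            for (int i = k + 1; i < m; ++i)
                w += Ab[i + k * lda] * Ab[i + j * lda];
            w *= taub[k];
            Ab[k + j * lda] -= w;
            for (int i = k + 1; i < m; ++i)
                Ab[i + j * lda] -= w * Ab[i + k * lda];
        }
    }
}

int main()
{
    const int m = 8, n = 8, lda = m, batch = 1000;  // illustrative sizes
    std::vector<double> hA((size_t)lda * n * batch);
    for (size_t i = 0; i < hA.size(); ++i)
        hA[i] = (double)rand() / RAND_MAX;

    double *dA, *dTau;
    cudaMalloc(&dA,   hA.size() * sizeof(double));
    cudaMalloc(&dTau, (size_t)((m < n) ? m : n) * batch * sizeof(double));
    cudaMemcpy(dA, hA.data(), hA.size() * sizeof(double),
               cudaMemcpyHostToDevice);

    int threads = 128, blocks = (batch + threads - 1) / threads;
    geqrf_batched_naive<<<blocks, threads>>>(m, n, dA, lda, dTau, batch);
    cudaDeviceSynchronize();

    cudaMemcpy(hA.data(), dA, hA.size() * sizeof(double),
               cudaMemcpyDeviceToHost);
    printf("R(0,0) of the first matrix: %f\n", hA[0]);

    cudaFree(dA);
    cudaFree(dTau);
    return 0;
}

In this naive layout each thread streams its matrix from global memory on every update, which is exactly the kind of data movement the chapter's blocking and data-reuse strategies are designed to avoid.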

URL: https://link.springer.com/chapter/10.1007/978-3-031-08751-6_5
DOI: 10.1007/978-3-031-08751-6_5