Asynchronous SGD for DNN Training on Shared-Memory Parallel Architectures

Title: Asynchronous SGD for DNN Training on Shared-Memory Parallel Architectures
Publication Type: Tech Report
Year of Publication: 2020
Authors: Lopez, F., E. Chow, S. Tomov, and J. Dongarra
Technical Report Series Title: Innovative Computing Laboratory Technical Report
Number: ICL-UT-20-04
Date Published: 2020-03
Institution: University of Tennessee, Knoxville
Keywords: Asynchronous iterative methods, Deep learning, GPU, multicore CPU, Stochastic Gradient Descent
Abstract

We present a parallel asynchronous Stochastic Gradient Descent (SGD) algorithm for shared-memory architectures. Unlike previous asynchronous algorithms, we consider the case where the gradient updates are not particularly sparse. Within the MagmaDNN framework, we compare the parallel efficiency of the asynchronous implementation with that of the traditional synchronous implementation. Tests are performed for training deep neural networks on multicore CPUs and GPU devices.
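The report itself targets DNN training in MagmaDNN, but the basic shared-memory asynchronous SGD pattern it builds on can be illustrated with a small self-contained sketch. The C++ example below is an assumption-laden illustration only (Hogwild-style lock-free updates on a toy linear least-squares problem), not the implementation described in the report: several threads repeatedly read the shared parameter vector, compute a single-sample gradient from possibly stale values, and write their updates back without synchronization.

```cpp
// Hypothetical sketch of lock-free asynchronous SGD on a multicore CPU.
// Illustration only; not the MagmaDNN algorithm from the report.
// The model is linear least squares so the example stays self-contained.
#include <cstdio>
#include <random>
#include <thread>
#include <vector>

int main() {
    const int dim = 8, samples = 4096, threads = 4, epochs = 20;
    const double lr = 0.01;

    // Synthetic data: y = x . w_true with w_true = (1, ..., 1).
    std::vector<std::vector<double>> X(samples, std::vector<double>(dim));
    std::vector<double> y(samples);
    std::mt19937 gen(0);
    std::normal_distribution<double> dist(0.0, 1.0);
    for (int i = 0; i < samples; ++i) {
        double t = 0.0;
        for (int j = 0; j < dim; ++j) { X[i][j] = dist(gen); t += X[i][j]; }
        y[i] = t;
    }

    // Shared parameter vector: all workers read and write it without locks.
    // The data race is deliberate (Hogwild-style); a strictly conforming
    // program would use atomics or accept per-coordinate races explicitly.
    std::vector<double> w(dim, 0.0);

    auto worker = [&](int id) {
        std::mt19937 rng(id);
        std::uniform_int_distribution<int> pick(0, samples - 1);
        for (int step = 0; step < epochs * samples / threads; ++step) {
            int i = pick(rng);
            // Gradient of 0.5*(x.w - y)^2 for one sample, read from the
            // possibly stale shared parameters.
            double pred = 0.0;
            for (int j = 0; j < dim; ++j) pred += X[i][j] * w[j];
            double err = pred - y[i];
            // Asynchronous update: no lock, may race with other workers.
            for (int j = 0; j < dim; ++j) w[j] -= lr * err * X[i][j];
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t) pool.emplace_back(worker, t);
    for (auto& t : pool) t.join();

    std::printf("w[0] after async SGD: %f (expected ~1.0)\n", w[0]);
    return 0;
}
```

Note that in the dense-gradient setting the abstract describes, every worker touches every parameter on each update, so such races are frequent rather than rare; the report's comparison against synchronous SGD addresses precisely this regime.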
