Abstract: We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook-authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available as open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time-domain convolutions outperform Fourier frequency-domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.
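As a minimal illustration of the idea behind FFT-based convolution layers (not code from the paper, which is CUDA; NumPy and the toy signal values below are assumptions for the sketch), the convolution theorem states that pointwise multiplication in the Fourier domain corresponds to convolution in the time domain. CNN libraries often compute cross-correlation rather than convolution, but the frequency-domain trick is the same:

```python
import numpy as np

def fft_conv(x, k):
    """Full linear convolution of two 1-D signals via FFT.

    Zero-padding both operands to the full output length turns the FFT's
    circular convolution into an ordinary linear convolution.
    """
    n = len(x) + len(k) - 1          # output length of a full convolution
    X = np.fft.rfft(x, n)            # forward transforms, padded to n
    K = np.fft.rfft(k, n)
    return np.fft.irfft(X * K, n)    # pointwise product, then inverse FFT

# Toy check against a direct time-domain convolution.
x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])
print(np.allclose(fft_conv(x, k), np.convolve(x, k)))  # should print True
```

The payoff in practice comes from amortization: in a convolutional layer the transforms of inputs, kernels, and gradients can be reused across many pointwise products, which is where the paper's cuFFT- and fbfft-based implementations gain over direct time-domain convolution for larger kernels.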
Comments: Camera ready for ICLR 2015
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:1412.7580 [cs.LG] (or arXiv:1412.7580v3 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.1412.7580
From: Nicolas Vasilache
[v1] Wed, 24 Dec 2014 01:31:36 UTC (1,101 KB)
[v2] Tue, 30 Dec 2014 16:55:04 UTC (1,100 KB)
[v3] Fri, 10 Apr 2015 20:01:00 UTC (1,101 KB)
Title: Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, by Nicolas Vasilache and 5 other authors