Conference Papers, Network and Parallel Computing, 2019

Efficient Processing of Convolutional Neural Networks on SW26010

Abstract

Artificial intelligence has developed rapidly in recent years, and deep neural networks are the basis of many artificial intelligence applications, so accelerating their computation is very important. To explore the potential for accelerating deep neural networks on various hardware platforms, we propose a convolutional neural network optimization method based on the weight-stationary dataflow for the SW26010 processor. We restructure the convolution loops and use a hybrid DMA transmission mode to increase memory bandwidth and reduce memory access overhead. On top of these, further optimizations are applied based on register communication, double buffering of asynchronous DMA transfers, instruction scheduling and other schemes. Finally, we achieve a double-precision convolution performance of over 2.4 Tflops, 81% of the processor's peak performance. Across multiple parameter configurations, we achieve a 2.4–4.0× speedup compared to the Tesla K80 GPU with cuDNN v7.
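
The abstract's core idea, a weight-stationary dataflow in which the kernel weights are held in fast local memory while input and output tiles stream past, can be illustrated with a minimal sketch. Everything below (function names, the fixed 3×3 kernel, the loop structure) is an illustrative assumption, not the authors' code and not the SW26010 athread API; the comments only mark where the asynchronous, double-buffered DMA transfers mentioned in the abstract would fit.

```c
/* Minimal sketch of a weight-stationary direct convolution.
 * The kernel weights are copied once into a small local buffer
 * (standing in for a core's scratchpad memory) and reused for
 * every output element, so only input/output tiles need to move.
 * On SW26010 those tile movements would be asynchronous DMAs,
 * double-buffered to overlap with compute; that part is only
 * indicated in comments here. */

#define K 3   /* assumed square kernel size */

void conv2d_weight_stationary(const double *in, int H, int W,
                              const double *weight,  /* K*K kernel */
                              double *out)           /* (H-K+1)*(W-K+1) */
{
    double w_local[K][K];   /* "stationary" copy kept in fast local memory */

    for (int i = 0; i < K; ++i)
        for (int j = 0; j < K; ++j)
            w_local[i][j] = weight[i * K + j];

    const int OH = H - K + 1, OW = W - K + 1;

    /* Output-position loops are outermost and the weight loops are
     * innermost, so w_local is reused OH*OW times per load.  In a
     * double-buffered version, the DMA fetching the next block of
     * input rows would be issued here, before computing on the
     * block already resident in local memory. */
    for (int oy = 0; oy < OH; ++oy) {
        for (int ox = 0; ox < OW; ++ox) {
            double acc = 0.0;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    acc += w_local[ky][kx] * in[(oy + ky) * W + (ox + kx)];
            out[oy * OW + ox] = acc;
        }
    }
}
```
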
Main file: 486810_1_En_26_Chapter.pdf (487.63 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03770543, version 1 (06-09-2022)

Licence

Attribution

Identifiers

Cite

Yi Zhang, Bing Shu, Yan Yin, Yawei Zhou, Shaodi Li, et al.. Efficient Processing of Convolutional Neural Networks on SW26010. 16th IFIP International Conference on Network and Parallel Computing (NPC), Aug 2019, Hohhot, China. pp.316-321, ⟨10.1007/978-3-030-30709-7_26⟩. ⟨hal-03770543⟩