Dependency-Aware Tensor Scheduler for Industrial AI Applications: Dymem—An Aggressive Data-Swapping Policy for Training Nonlinear Deep Neural Networks
Artificial intelligence (AI) applications based on deep neural networks (DNNs) are widely used in industry, for example in natural language processing and computer vision. Researchers and industry practitioners typically train complex, hundred-layer deep learning (DL) networks on GPUs. However, as networks grow wider and deeper, the limited GPU memory becomes a significant bottleneck that restricts the size of the networks that can be trained. In the training of DNN-based AI applications, the intermediate layer outputs (activations) are the major contributors to the memory footprint. Various data-swapping techniques, such as offloading and prefetching of intermediate layer outputs, have been proposed to overcome the GPU memory shortage by using CPU dynamic random-access memory (DRAM) as an external buffer for the GPU.
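The general offload/prefetch pattern described above can be illustrated with PyTorch's saved-tensor hooks, which let training code intercept activations as they are saved for the backward pass. The following is a minimal sketch of that pattern, not of the Dymem policy itself; the model, tensor sizes, and hook names are arbitrary examples chosen for illustration.

import torch
import torch.nn as nn

def pack_to_cpu(tensor):
    # Offload: copy a saved activation into pinned CPU DRAM so the
    # GPU copy can be freed between the forward and backward passes.
    packed = torch.empty(tensor.size(), dtype=tensor.dtype,
                         pin_memory=True)
    packed.copy_(tensor, non_blocking=True)
    return packed

def unpack_to_gpu(packed):
    # Prefetch: bring the activation back to the GPU when the
    # backward pass needs it.
    return packed.to("cuda", non_blocking=True)

model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(),
                      nn.Linear(4096, 10)).cuda()
x = torch.randn(64, 4096, device="cuda")

# Every tensor saved for backward inside this context is swapped
# out to CPU DRAM and swapped back in on demand during backward.
with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_to_gpu):
    loss = model(x).sum()
loss.backward()

An aggressive swapping policy such as the one the paper proposes would additionally decide which tensors to swap and when to issue the copies so that transfers overlap with computation; the hooks above swap every saved tensor unconditionally and rely only on non-blocking copies for overlap.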