Abstract
Over the last decades, GPUs have gone from esoteric and experimental devices to becoming the workhorse of supercomputers and HPC machines. Today, half of the ten fastest supercomputers (including number one) use NVIDIA GPUs for performance, and nine of the top ten most energy-efficient supercomputers use NVIDIA GPUs. Developing efficient software for these platforms is a major challenge, but the complexity can be significantly reduced by using efficient development environments and software ecosystems (Wilson et al. 2014).
In this work, we examine the performance and energy efficiency of using Jupyter Notebooks and Python to develop HPC codes that run on the GPU. We investigate how well the resulting improvements port between CUDA and OpenCL; between GPU generations; and between low-end, mid-range, and high-end GPUs. Our findings show that the impact of using Python is negligible for our applications.