While the CUDA ecosystem provides many ways to accelerate applications, R cannot directly call CUDA libraries or launch CUDA kernel functions. To solve this problem, we need to build an interface to bridge R and CUDA, as the development layer of Figure 1 shows. The first approach is to use existing GPU-accelerated R packages listed under High-Performance and Parallel Computing with R on the CRAN site. Examples include gputools and cudaBayesreg. These packages are very easy to install and use. On the other hand, the number of GPU packages is currently limited, quality varies, and only a few domains are covered. The second approach is to use the GPU through CUDA directly. For productivity, we should also encapsulate these interface functions in the R environment, so that the technical changes between the CPU and GPU are transparent to the R user.
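As a minimal sketch of the second approach, the CUDA side can expose a host function whose signature matches R's `.C()` calling convention, which passes every argument as a pointer and returns nothing. The kernel and function names below (`vec_add`, `gpu_vec_add`) are hypothetical, chosen only for illustration:

```cuda
// Hypothetical bridge: a vector-add kernel exposed to R through .C().
#include <cuda_runtime.h>

__global__ void vec_add(const double *a, const double *b, double *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// R's .C() interface passes all arguments as pointers and expects void.
extern "C" void gpu_vec_add(double *a, double *b, double *c, int *n) {
    size_t bytes = (size_t)(*n) * sizeof(double);
    double *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);
    int threads = 256;
    int blocks  = (*n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(d_a, d_b, d_c, *n);
    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);  // result back to R's memory
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);
}
```

After compiling this into a shared library with `nvcc --shared -Xcompiler -fPIC`, the R side would `dyn.load()` it and call `.C("gpu_vec_add", ...)`. Wrapping that call in an ordinary R function, with a plain CPU fallback such as `a + b` when no GPU is available, is exactly the kind of encapsulation that keeps the CPU/GPU switch transparent to the R user.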