GPU Support

  • We have recently added some grid nodes with Nvidia GA100 GPUs at UKI-LT2-IC-HEP.
  • I am not sure whether GPUs are available at other grid sites, or whether they have been tested or used much there.
  • These are very much "experimental" at the moment and their use has not been well tested. You should have a very good understanding of how your GPU code works before trying to run it on the grid.
  • Keep in mind, though, that the worker nodes only have the low-level Nvidia drivers installed; your job environment will need to provide CUDA support itself, for example via Anaconda or perhaps a container image (currently untested; see the sketch after this list).
  • If you require support please email lcg-site-admin at imperial.ac.uk.
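
For the container route, a minimal and so far untested sketch (assuming Singularity/Apptainer is available on the worker node, and using a purely illustrative image tag) would be to run the payload inside a CUDA-enabled image with the --nv option, which exposes the host's Nvidia driver and GPU inside the container:

# Untested sketch: --nv binds the host Nvidia driver and GPU into the container
singularity exec --nv docker://nvidia/cuda:11.4.3-runtime-centos7 nvidia-smi

Here nvidia-smi simply confirms that the GPU is visible; the same pattern would be used to run a real CUDA application.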

Anaconda Example Using DIRAC

The following example is based on the Anaconda Python distribution, so some familiarity with it is desirable. Through Anaconda we can obtain "cudatoolkit", which provides CUDA support for the GPU, and "numba", a Python library that can be used to run code on the GPU.

[
JobName = "gpu_test";
Executable = "gpu_test.sh";
Arguments = "";
StdOutput = "StdOut";
StdError = "StdErr";
InputSandbox = {"gpu_test.sh","gpu_test.py","LFN:/gridpp/user/d/dan.whitehouse/Anaconda3-2022.05-Linux-x86_64.sh"};
OutputSandbox = {"StdOut","StdErr"};
Site = "LCG.UKI-LT2-IC-HEP.uk";
Tags = {"GPU"};
]
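
Note that the last InputSandbox entry is an LFN, i.e. a file that already lives on grid storage rather than one uploaded with the job. As a rough sketch (assuming a configured DIRAC client; the storage element name below is only a placeholder), such a file could be uploaded in advance with:

# Upload the large installer to a storage element once, ahead of submission
dirac-dms-add-file /gridpp/user/d/dan.whitehouse/Anaconda3-2022.05-Linux-x86_64.sh Anaconda3-2022.05-Linux-x86_64.sh UKI-LT2-IC-HEP-disk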

The Site and Tags attributes steer the job to the GPU-tagged nodes at Imperial. Our bash script is set as the Executable in the JDL above and contains:

#!/bin/bash
# Install Anaconda from the bundled installer into the job's working directory
bash ./Anaconda3-2022.05-Linux-x86_64.sh -p ${PWD}/gputest -b
# Make the conda shell functions available and activate the base environment
source ${PWD}/gputest/etc/profile.d/conda.sh
conda info -e
conda activate base
# Install CUDA support and numba non-interactively (-y avoids a prompt)
conda install -y cudatoolkit numba
# Run the test script with the environment's Python
python gpu_test.py

Here we install Anaconda using the installer script that we downloaded and referenced in our InputSandbox; this is simply the default installation script available from the Anaconda website. In the JDL it is referenced by LFN because it has been uploaded to my personal share in the gridpp area of our SE; we do this, rather than uploading it with the job, because the Anaconda installer is quite large. Once Anaconda is installed we list our Python environments and activate the "base" environment. We then install "cudatoolkit", so that we can make use of the GPU, and the "numba" package. With those dependencies installed in our environment, we can execute our Python script, which simply contains:

#!/usr/bin/env python
from numba import cuda
print(cuda.gpus)
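
Submitting the job and retrieving its sandbox output with the DIRAC command-line tools might look roughly like the following sketch (the job ID is illustrative):

# Submit the JDL and note the job ID that DIRAC returns
dirac-wms-job-submit gpu_test.jdl
# Once the job has finished, check its status and fetch StdOut/StdErr
dirac-wms-job-status 12345678
dirac-wms-job-get-output 12345678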

If we submit the job and look at the end of our output, we see:

<Managed Device 0>

We can see that we can indeed access our GPU from Python.