Enabling application processing on a graphics processing unit (Linux, Windows only)

If you have the correct system requirements, you can offload some application processing to a general-purpose graphics processing unit (GPU). By enabling a system property, suitable workloads are moved from the CPU to the GPU for processing. You can also set an option on the command line that causes the Just-In-Time (JIT) compiler to offload certain processing tasks to the GPU.

Before you begin

Check that your system meets the necessary hardware and software requirements. For more information, see GPU system requirements (Linux, Windows only).

About this task

Some application processing tasks can benefit from processing data on a GPU instead of the CPU, provided the workload is of a sufficient size to justify moving the data. If you can determine exactly when a GPU could be used, you can develop applications that use the available application programming interfaces to offload specific tasks. Alternatively, you can let the virtual machine (VM) make this decision automatically by setting a system property on the command line. The JIT can also offload certain processing tasks based on performance heuristics.

Procedure

  1. Linux® only: Set the LD_LIBRARY_PATH environment variable to point to the CUDA library. For example, export LD_LIBRARY_PATH=<CUDA_LIBRARY_PATH>:$LD_LIBRARY_PATH, where the <CUDA_LIBRARY_PATH> variable is the full path to the CUDA library.
    For CUDA 7.5, the <CUDA_LIBRARY_PATH> variable is /usr/local/cuda-7.5/lib64, which assumes CUDA is installed to the default directory.
    Note: If you are using Just-In-Time Compiler (JIT) based GPU support, you must also include a path to the NVIDIA Virtual Machine (NVVM) library. For example, the <CUDA_LIBRARY_PATH> variable is /usr/local/cuda-7.5/lib64:<NVVM_LIBRARY_PATH>.
    • On Linux x86-64 systems, the <NVVM_LIBRARY_PATH> variable is /usr/local/cuda-7.5/nvvm/lib64.
    • On IBM® Power® 8 systems, the <NVVM_LIBRARY_PATH> variable is /usr/local/cuda-7.5/nvvm/lib.
    These paths assume that the NVVM library is installed to the default directory.
  2. Windows only: Set the PATH environment variable to include the CUDA library. Open the System icon in the Control Panel. Click Advanced system settings, then Environment Variables. Select the PATH variable and click Edit. Append the following string to the Variable value field: <CUDA_LIBRARY_PATH>, where the <CUDA_LIBRARY_PATH> variable is the full path to the CUDA library. Ensure that multiple PATH values are separated by a semicolon (;).
    For CUDA 7.5, the <CUDA_LIBRARY_PATH> variable is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin, which assumes CUDA is installed to the default directory.
    You can also set this environment variable directly on the command line with the following command: set PATH=<CUDA_LIBRARY_PATH>;%PATH%
    Note: If you are using Just-In-Time Compiler (JIT) based GPU support, you must also include paths to the NVIDIA Virtual Machine (NVVM) library, and to the NVIDIA Management Library (NVML). For example, the <CUDA_LIBRARY_PATH> variable is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin;<NVVM_LIBRARY_PATH>;<NVML_LIBRARY_PATH>. If the NVVM library is installed to the default directory, the <NVVM_LIBRARY_PATH> variable is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\nvvm\bin. You can find the NVML library in your NVIDIA drivers directory. The default location of this directory is C:\Program Files\NVIDIA Corporation\NVSMI.
  3. If you want the VM to determine when to move suitable workloads to the GPU, follow these steps:
    1. Set the -Dcom.ibm.gpu.enable system property on the command line when you run your application.
      This property can be set for specific processing functions, such as sort. For more information, see -Dcom.ibm.gpu.enable (Linux, Windows only).
    2. Optional: If you have more than one GPU installed on your system and you want your application to target a specific GPU, you can set the CUDA environment variable CUDA_VISIBLE_DEVICES.
      For example, setting CUDA_VISIBLE_DEVICES=1 causes only NVIDIA device identifier 1 to be visible to the application.
      For more information about this variable, see CUDA environment variables on the NVIDIA website.
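    As a sketch of the VM-directed approach, the following program contains the kind of large primitive-array sort that becomes a candidate for GPU offload when the system property is set. The class name, random seed, and array size are illustrative; without the property, or without a suitable GPU, the same code runs unchanged on the CPU.

    ```java
    import java.util.Arrays;
    import java.util.Random;

    // Illustrative workload: with the system property set on the command line,
    // for example
    //   java -Dcom.ibm.gpu.enable=sort GpuSortDemo
    // a large primitive-array sort like this can be offloaded to the GPU
    // automatically; otherwise it runs on the CPU as usual.
    public class GpuSortDemo {
        public static void main(String[] args) {
            int[] data = new Random(42).ints(1_000_000).toArray();
            Arrays.sort(data);  // eligible for GPU offload when enabled
            System.out.println(data[0] <= data[data.length - 1]);  // prints true
        }
    }
    ```

    Note that the application source does not change: the decision to offload is made by the VM at run time, which is why the workload must be large enough to outweigh the cost of moving the data to the GPU.
    
    
    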
  4. To enable the JIT compiler to offload processing to a GPU, set the following option when you start your application: -Xjit:enableGPU.
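    A minimal sketch of a loop shape the JIT can consider for GPU offload is a data-parallel java.util.stream loop, as below; the class name, array sizes, and arithmetic are illustrative, and the code behaves identically on the CPU if the JIT's heuristics decide against offloading.

    ```java
    import java.util.stream.IntStream;

    // Illustrative parallel loop: when started with
    //   java -Xjit:enableGPU JitGpuDemo
    // the JIT can weigh the body of a parallel forEach loop like this one
    // for GPU execution based on its performance heuristics; otherwise the
    // loop runs on the CPU as normal.
    public class JitGpuDemo {
        public static void main(String[] args) {
            int n = 1_000_000;
            float[] a = new float[n];  // input, all zeros here
            float[] b = new float[n];  // output
            IntStream.range(0, n).parallel().forEach(i -> {
                b[i] = a[i] * 2.0f + 1.0f;  // simple element-wise computation
            });
            System.out.println(b[0]);  // prints 1.0
        }
    }
    ```
    
    
    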

Results

If the -Dcom.ibm.gpu.enable system property is set correctly, the processing tasks that are specified with the system property are automatically offloaded to the GPU when they meet a minimum workload size.

If you have set the -Xjit:enableGPU option, the JIT uses performance heuristics to determine which workloads to send to the GPU for processing.

If you experience problems, see GPU problem determination (Linux, Windows only).