To use GPUs, you need three things:

  • a GPU-enabled instance (any of our T4 instances will work)
  • a Docker image based on the NVIDIA CUDA Docker images (if you're building an image in Saturn, saturnbase-gpu is the one you want; it is based on nvidia/cuda:10.1-devel-ubuntu18.04)
  • GPU-enabled libraries
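
If you are building your own image rather than starting from saturnbase-gpu, a minimal Dockerfile sketch might look like the following. This is my own illustration, not the saturnbase-gpu Dockerfile; the Python packages installed are placeholders for whatever your project needs:

```dockerfile
# Start from the same CUDA base image that saturnbase-gpu is based on
FROM nvidia/cuda:10.1-devel-ubuntu18.04

# Assumption: install Python 3.7 and pip from the Ubuntu repositories
RUN apt-get update && \
    apt-get install -y python3.7 python3-pip && \
    rm -rf /var/lib/apt/lists/*
```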

The last point is a bit tricky, since it depends on the library you're using.


For TensorFlow, you need GPU builds. In Conda, I execute conda search tensorflow and conda search tensorflow-base and look for conda packages with build strings indicating that they are GPU-enabled (and built for the right version of Python). I then add them to an environment.yml:
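
To make the filtering concrete, here is a small helper (my own sketch, not part of conda) that checks whether a conda build string looks like a GPU build for a given Python version:

```python
def is_gpu_build(build_string, python_version):
    """Rough heuristic: return True if a conda build string looks like
    a GPU build for the given Python version (e.g. "3.7").

    GPU builds typically contain "gpu" (TensorFlow) or a "cuda"/"_cu"
    tag (PyTorch channel packages)."""
    py_tag = "py" + python_version.replace(".", "")  # "3.7" -> "py37"
    has_gpu = ("gpu" in build_string
               or "cuda" in build_string
               or "_cu" in build_string)
    return has_gpu and py_tag in build_string

# The tensorflow build below is GPU-enabled and built for Python 3.7
print(is_gpu_build("gpu_py37h7a4bb67_0", "3.7"))   # True
# An MKL (CPU-only) build is rejected
print(is_gpu_build("mkl_py37h80a91df_0", "3.7"))   # False
```

Note that build-string conventions vary by channel (e.g. the pytorch channel writes "py3.7" rather than "py37"), so treat this as a starting point rather than a universal rule.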

- tensorflow=2.1.0=gpu_py37h7a4bb67_0
- tensorflow-base=2.1.0=gpu_py37h6c5654b_0
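
Putting those lines in context, a minimal environment.yml for GPU TensorFlow might look like this (the environment name and the explicit Python pin are my own placeholders):

```yaml
name: tf-gpu
channels:
  - defaults
dependencies:
  - python=3.7
  - tensorflow=2.1.0=gpu_py37h7a4bb67_0
  - tensorflow-base=2.1.0=gpu_py37h6c5654b_0
```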


For PyTorch, the process is the same, except that the packages are in the pytorch conda channel and are typically tagged with the appropriate CUDA version. We are using CUDA 10.1. Execute conda search -c pytorch pytorch, and possibly conda search -c pytorch torchvision:

- pytorch=1.4.0=py3.7_cuda10.1.243_cudnn7.6.3_0
- torchvision=0.5.0=py37_cu101
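
As with TensorFlow, these lines go into an environment.yml; a minimal sketch (environment name and Python pin are my own placeholders, and note the pytorch channel entry) might be:

```yaml
name: torch-gpu
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.7
  - pytorch=1.4.0=py3.7_cuda10.1.243_cudnn7.6.3_0
  - torchvision=0.5.0=py37_cu101
```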
