Nvidia-Docker - Verifying the Bridge Between Containers and the GPU
What is Nvidia-Docker?
Simple: it bridges the gap between containers and the GPU.
Nvidia created a runtime for Docker called Nvidia-Docker. The goal of this open-source project was to bring the ease and agility of containers to the CUDA programming model.
Since Docker didn’t support GPUs natively, this project instantly became a hit with the CUDA community. Nvidia-Docker is essentially a wrapper around the Docker CLI that transparently provisions a container with the dependencies needed to execute code on the GPU. You only need nvidia-docker when running containers that use the GPU; plain docker is sufficient otherwise.
You can run Nvidia-Docker on Linux machines that have a GPU along with the required drivers installed.
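Before using Nvidia-Docker, it is worth confirming that the host actually has the pieces it depends on. A minimal pre-flight sketch (the list of commands checked is an assumption; adjust it to your setup):

```shell
#!/bin/sh
# Hedged sketch: verify the host has the tools Nvidia-Docker relies on.
# check_cmd prints "<name>: ok" if the command is on PATH, "<name>: missing" otherwise.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: missing"
    fi
}

check_cmd nvidia-smi      # NVIDIA driver utilities
check_cmd docker          # Docker engine
check_cmd nvidia-docker   # the Nvidia-Docker wrapper itself
```

If any line reports `missing`, install the corresponding component before continuing.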
All our GPU plans come with NVIDIA® CUDA and cuDNN support, and with Nvidia-Docker preinstalled.
How to verify docker container is able to access the GPU?
After creating a GPU node, you’ll need to log into the server via SSH:
From a terminal on your local computer, connect to the server as root. Make sure to substitute the server’s IP address that you received in your welcome email:
ssh root@use_your_server_ip
If you did not add an SSH key when you created the server, you’ll find your root password in your welcome email.
Below is the command to verify Nvidia-Docker on an Ubuntu 16.04 machine powered by an NVIDIA® GPU.
nvidia-docker run --rm nvidia/cuda nvidia-smi
The nvidia-smi command runs NVIDIA's System Management Interface (SMI) to confirm that the Docker container can access the GPU. Behind the scenes, SMI communicates with the NVIDIA driver, which in turn queries the GPU.
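For scripted checks, nvidia-smi also supports a CSV query mode, and you can count the lines of its output to see how many GPUs the container can reach. A sketch (the `nvidia/cuda` image tag and the query fields are assumptions; the sample line is illustrative, not real data):

```shell
#!/bin/sh
# On a GPU host you would pipe real output into gpu_count:
#   nvidia-docker run --rm nvidia/cuda nvidia-smi \
#       --query-gpu=name,driver_version --format=csv,noheader | gpu_count
# gpu_count reads CSV on stdin and prints the number of non-empty lines,
# i.e. one per GPU visible to the container.
gpu_count() {
    grep -c .
}

# Illustrative sample: one GPU reported as "name, driver_version".
sample="Tesla K80, 367.57"
echo "$sample" | gpu_count
```

A count of `0` (or an error from the container) means the GPU is not visible and the driver or runtime setup needs attention.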
We can also verify that CUDA is installed by running the command below.
nvidia-docker run --rm nvidia/cuda nvcc -V
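The banner printed by nvcc -V ends in a line like `Cuda compilation tools, release 8.0, V8.0.61`, so the CUDA release can be extracted for use in scripts. A small sketch (the version number in the sample banner is an example, not a claim about your image):

```shell
#!/bin/sh
# Extract the CUDA release number from `nvcc -V` style output.
# On a GPU host you would pipe the real banner:
#   nvidia-docker run --rm nvidia/cuda nvcc -V | cuda_release
cuda_release() {
    # Print only the "release X.Y" number from the final banner line.
    sed -n 's/.*release \([0-9.]*\),.*/\1/p'
}

# Illustrative banner line in nvcc's format:
echo "Cuda compilation tools, release 8.0, V8.0.61" | cuda_release   # prints 8.0
```

This lets a provisioning script assert the expected CUDA version instead of relying on a visual check.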