Nvidia-Docker - Verifying the Bridge Between Containers and the GPU

What is Nvidia-Docker?

Simple: it bridges the gap between containers and the GPU.

Nvidia created a runtime for Docker called Nvidia-Docker. The goal of this open-source project was to bring the ease and agility of containers to the CUDA programming model.

../_images/gpu1.png

Since Docker didn’t support GPUs natively, this project instantly became a hit with the CUDA community. Nvidia-Docker is essentially a wrapper around the Docker CLI that transparently provisions a container with the dependencies it needs to execute code on the GPU. It is only necessary to use nvidia-docker run when executing a container that uses GPUs.

You can run Nvidia-Docker on Linux machines that have a GPU and the required NVIDIA drivers installed.
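You can confirm those host-side prerequisites yourself before starting any containers. A minimal sketch, assuming the standard nvidia-smi, docker, and nvidia-docker binaries are on your PATH:

```shell
# Confirm the NVIDIA driver is installed and loaded on the host;
# this prints the driver version and any GPUs it can see.
nvidia-smi

# Confirm Docker itself is available.
docker --version

# Confirm the nvidia-docker wrapper is on the PATH.
command -v nvidia-docker
```

If any of these fail, fix the host setup first; the container checks below cannot succeed without it.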

All our GPU plans are NVIDIA® CUDA-capable and come with cuDNN and Nvidia-Docker installed.

How to verify that a Docker container is able to access the GPU?

After you create a GPU node, you’ll need to log into the server via SSH:

From a terminal on your local computer, connect to the server as root. Make sure to substitute the server’s IP address, which you received in your welcome email.

ssh root@use_your_server_ip

If you did not add an SSH key when you created the server, you’ll find your root password in your welcome email.

Below are the commands to verify Nvidia-Docker on an Ubuntu 16.04 machine powered by an NVIDIA® GPU.

# nvidia-docker run --rm nvidia/cuda nvidia-smi
../_images/gpu2.png

The nvidia-smi command runs the NVIDIA System Management Interface (SMI) to confirm that the Docker container is able to access the GPU. Behind the scenes, SMI communicates with the GPU through the NVIDIA driver.
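If the plain nvidia-smi check succeeds, you can also ask SMI for specific fields in machine-readable form, which is easier to capture in scripts than the full table. A sketch using the same nvidia/cuda image (--query-gpu and --format are standard nvidia-smi options):

```shell
# Print just the GPU name, driver version, and total memory as CSV.
nvidia-docker run --rm nvidia/cuda \
    nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```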

We can also verify that the CUDA toolkit is installed by running the command below; nvcc is the CUDA compiler, and -V prints its version.

# nvidia-docker run --rm nvidia/cuda nvcc -V
../_images/gpu3.png
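Beyond checking versions, a quick end-to-end test is to compile and run a trivial CUDA program inside the container. A sketch, assuming the nvidia/cuda image ships nvcc as above; the file name check.cu is arbitrary:

```shell
# Write a minimal CUDA program that just counts visible devices.
cat > check.cu <<'EOF'
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Succeeds only if the driver and runtime can actually see a GPU.
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);
    return 0;
}
EOF

# Mount the current directory into the container, then compile and run.
nvidia-docker run --rm -v "$PWD":/src -w /src nvidia/cuda \
    sh -c 'nvcc check.cu -o check && ./check'
```

If this prints a device count of one or more, the whole chain (driver, Nvidia-Docker, CUDA toolkit) is working.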