NVIDIA® GPU Cloud (NGC) Catalog CLI

This article explains how to use the NGC Catalog CLI.

Introduction

The NVIDIA® GPU Cloud (NGC) Catalog CLI is a command-line interface for managing content within the NGC Registry. The CLI operates within a shell and lets you use scripts to automate commands. With the NGC Catalog CLI, you can:

  • View a list of GPU-accelerated Docker container images, pre-trained deep-learning models, and scripts for creating deep-learning models.

  • Download models and model-scripts.

Note

Currently, the NGC Catalog CLI does not provide the ability to download container images. To download container images, use the docker pull command from the Docker command line.

This document provides an introduction to using the NGC Catalog CLI. For a complete list of commands and options, use the -h option as explained in Using NGC Catalog CLI.
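
Because the CLI runs in a shell, the commands described in this article can be combined in ordinary shell scripts. The following is a minimal sketch of such a script; the file names, target directory, and the model chosen are illustrative only, and the individual commands are documented in the sections below:

#!/bin/bash
# Save the current lists of container images and models for later review.
ngc registry image list > images.txt
ngc registry model list > models.txt

# Download one model version into ./models (example model and version).
mkdir -p ./models
ngc registry model download-version nvidia/bert_for_tensorflow:1 -d ./models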

Note

Currently, the NGC CLI works only on Ubuntu 18. For other operating systems, refer to the NGC GUI documentation.

Downloading Content Within the NGC Registry

The content within the NGC registry is either locked or unlocked. Unlocked content is freely available for download by guest users. To download locked content, you must sign up for an NGC community user account.

Guest Users

Guest users can access the NGC website without having to log in. From the website, guest users can download the NGC Catalog CLI and start using it to view content and download unlocked content.

Community Users

To be a community user and download locked NGC content, you must sign up for an NGC account, sign into the NGC website with your account, and then generate an API key. See the NVIDIA GPU Cloud Getting Started Guide for instructions.

Using NGC Catalog CLI

To run an NGC CLI command, enter “ngc” followed by the appropriate options.

To see a description of the available commands and options, use the -h option after any command or option.

Example 1: To view a list of all the available options for ngc, enter

root@localhost:~# ngc -h
usage: ngc [--debug] [--format_type] [-h] [-v] {config,diag,registry} …
NVIDIA NGC Catalog CLI
optional arguments:
-h, --help            show this help message and exit
-v, --version         show the CLI version and exit.
--debug               Enables debug mode.
--format_type         Change output format type. Options: ascii, csv, json.
ngc:
{config,diag,registry}
config              Configuration Commands
diag                Diagnostic commands
registry            Registry Commands
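
The --format_type option shown above can also be used to produce CSV or JSON output for further processing instead of the default ASCII tables. A sketch, assuming the option takes one of the listed values (ascii, csv, json) as its argument:

root@localhost:~# ngc --format_type csv registry image list
root@localhost:~# ngc --format_type json registry image info nvidia/tensorflow:19.10-py3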

Example 2: To view a description of the registry image command and options, enter

root@localhost:~# ngc registry image -h
 usage: ngc registry image [--debug] [--format_type] [-h] {info,list} …
 Container Image Registry Commands
 optional arguments:
 -h, --help      show this help message and exit
 --debug         Enables debug mode.
 --format_type   Change output format type. Options: ascii, csv, json.
 image:
 {info,list}
 info          Display information about an image repository or tagged
 image.
 list          Lists container images accessible by the user

Example 3: To view a description of the registry image info command and options, enter

root@localhost:~# ngc registry image info -h
usage: ngc registry image info [--debug] [--details] [--format_type]
                        [--history] [--layers] [-h]
                        <image>[:<tag>]

Display information about an image repository or tagged image.

positional arguments:
<image>[:<tag>]  Name of the image repository or tagged image.

optional arguments:
-h, --help       show this help message and exit
--debug          Enables debug mode.
--details        Show the details of an image repository
--format_type    Change output format type. Options: ascii, csv, json.
--history        Show the history of a tagged image
--layers         Show the layers of a tagged image

Preparing to Download Locked Content

If you plan to download locked content, be sure you have registered for an NGC account and generated an API key. Then issue the following command and enter your API key at the prompt.

root@localhost:~# ngc config set
Enter API key [no-apikey]. Choices: [, 'no-apikey']:<your-api-key>
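
Once the API key is stored, the registry commands described in the following sections can also return and download locked content. A quick way to confirm that the key was accepted is to run any registry command and check that it completes without an authentication error, for example:

root@localhost:~# ngc registry model list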

Accessing the Container Registry

The ngc registry image commands let you access ready-to-use GPU-accelerated container images from the registry.

Viewing Container Image Information

There are several commands for viewing information about available container images.

To list container images:

root@localhost:~# ngc registry image list

Example output

| TensorFlow            | nvidia/tensorflow      | 19.10-py3              | 3.39 GB    | Oct 28, 2019 | unlocked   |
| TensorRT              | nvidia/tensorrt        | 19.10-py3              | 2.22 GB    | Oct 28, 2019 | unlocked   |
| TensorRT Inference    | nvidia/tensorrtserver  | 19.10-py3              | 2.76 GB    | Oct 28, 2019 | unlocked   |
| Server                |                        |                        |            |              |            |
| Theano                | nvidia/theano          | 18.08                  | 1.49 GB    | Oct 18, 2019 | unlocked   |
| Transfer Learning     | nvidia/tlt-            | v1.0_py2               | 3.99 GB    | Oct 21, 2019 | unlocked   |
| Toolkit for Video     | streamanalytics        |                        |            |              |            |
| Streaming Analytics   |                        |                        |            |              |            |
| Torch                 | nvidia/torch           | 18.08-py2              | 1.24 GB    | Oct 18, 2019 | unlocked   |
| DeepStream -          | nvidia/video-          | latest                 | 2.52 GB    | Oct 20, 2019 | unlocked   |
| Intelligent Video     | analytics-demo         |                        |            |              |            |
| Analytics Demo        |                        |                        |            |              |            |
| Chainer               | partners/chainer       | 4.0.0b1                | 963.75 MB  | Oct 18, 2019 | locked     |
| Deep Cognition Studio | partners/deep-         | cuda9-2.5.1            | 2.05 GB    | Oct 18, 2019 | locked     |
|                       | learning-studio        |                        |            |              |            |
| DeepVision -          | partners/deepvision/ad | onpremise-1.0.1        | 240.24 MB  | Oct 21, 2019 | locked     |
| admin.console         | min.console            |                        |            |              |            |
| DeepVision -          | partners/deepvision/ad | onpremise-1.0.1        | 753.95 KB  | Oct 21, 2019 | locked     |
| admin.console.data    | min.console.data       |                        |            |              |            |
| DeepVision -          | partners/deepvision/vf | onpremise-2.0.0        | 3.29 GB    | Oct 21, 2019 | locked     |
| Demographics          | .demographics          |                        |            |              |            |

To view detailed information about a specific image, specify the image and the tag.

Example:

root@localhost:~# ngc registry image info nvidia/tensorflow:19.10-py3
 Image Information
     Name: nvidia/tensorflow:19.10-py3
     Architecture: amd64
     Schema Version: 1
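
Because the Catalog CLI does not pull container images itself (see the note in the introduction), a typical workflow is to locate the image and tag with the CLI and then pull it with Docker. A sketch, assuming the image is hosted on the standard NGC registry host nvcr.io:

root@localhost:~# ngc registry image info nvidia/tensorflow:19.10-py3
root@localhost:~# docker pull nvcr.io/nvidia/tensorflow:19.10-py3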

Accessing the Model Registry

The ngc registry model commands let you access ready-to-use deep learning models from the registry.

Viewing Model Information

There are several commands for viewing information about available models.

To see a list of models that are provided by NVIDIA:
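
root@localhost:~# ngc registry model list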

Example output

 +-----------------+-----------------+----------------+-----------------+--------------+-----------+---------------+------------+
 | Name            | Repository      | Latest Version | Application     | Framework    | Precision | Last Modified | Permission |
 +-----------------+-----------------+----------------+-----------------+--------------+-----------+---------------+------------+
 | BERT-Large      | nvidia/bert_for | 1              | Language        | TensorFlow   | FP16      | Oct 18, 2019  | unlocked   |
 | (pre-training)  | tensorflow      |                | Modelling       |              |           |               |            |
 | for TensorFlow  |                 |                |                 |              |           |               |            |
 | BERT-Large(pre- | nvidia/bert_tf_ | 1              | Language        | Tensorflow   | FP16      | Oct 19, 2019  | unlocked   |
 | training using  | pretraining_lam |                | Modelling       |              |           |               |            |
 | LAMB optimizer) | b_16n           |                |                 |              |           |               |            |
 | for TensorFlow  |                 |                |                 |              |           |               |            |
 | BERT-Base(fine- | nvidia/bert_tf_ | 2              | Language        | Tensorflow   | FP16      | Oct 18, 2019  | unlocked   |
 | tuning) - SQuAD | v1_1_base_fp16_ |                | Modelling       |              |           |               |            |
 | 1.1, seqLen=128 | 128             |                |                 |              |           |               |            |
 | BERT-Base(fine- | nvidia/bert_tf_ | 2              | Language        | Tensorflow   | FP16      | Oct 18, 2019  | unlocked   |
 | tuning) - SQuAD | v1_1_base_fp16_ |                | Modelling       |              |           |               |            |

To view all versions of a model, use the wildcard *.

root@localhost:~# ngc registry model list nvidia/bert_for_tensorflow:*
 +---------+----------+---------+------------+-----------+-----------+-----------+--------+--------------+--------------+
 | Version | Accuracy | Epochs  | Batch Size | GPU Model | Memory    | File Size | Owner  | Status       | Created Date |
 |         |          |         |            |           | Footprint |           |        |              |              |
 +---------+----------+---------+------------+-----------+-----------+-----------+--------+--------------+--------------+
 | 1       |          | 1000000 | 256        | V100      | 4011      | 3.77 GB   | NVIDIA | UPLOAD_COMPL | Jun 13, 2019 |
 |         |          |         |            |           |           |           |        | ETE          |              |
 +---------+----------+---------+------------+-----------+-----------+-----------+--------+--------------+--------------+

To view detailed information about a model, you can specify the model

root@localhost:~# ngc registry model info nvidia/bert_for_tensorflow
 Model Information
     Name: bert_for_tensorflow
     Application: Language Modelling
     Framework: TensorFlow
     Model Format: TF ckpt
     Precision: FP16
     Description:
         # BERT Large(pre-training) for TensorFlow

or the model version.

root@localhost:~# ngc registry model info nvidia/bert_for_tensorflow:1
 Model Version Information
     Id: 1
     Batch Size: 256
     Memory Footprint: 4011
     Number Of Epochs: 1000000
     Accuracy Reached:
     GPU Model: V100
     Owner Name: NVIDIA
     Created Date: 2019-06-13T22:50:06.405Z
     Description:
     Pretrained weights for the BERT (pre-training) model.
     Status: UPLOAD_COMPLETE
     Total File Count: 3
     Total Size: 3.77 GB

Downloading a Model

To download a model from the registry to your local disk, specify the model name and version.

root@localhost:~# ngc registry model download-version nvidia/<model-name>:<version>

Example: Downloading a model to the current directory.

root@localhost:~# ngc registry model download-version nvidia/bert_for_tensorflow:1
 Downloaded 3.46 GB in 6m 22s, Download speed: 9.26 MB/s
 Transfer id: bert_for_tensorflow_v1 Download status: Completed.
 Downloaded local path: /root/bert_for_tensorflow_v1
 Total files downloaded: 3
 Total downloaded size: 3.46 GB
 Started at: 2019-10-30 18:14:23.667980
 Completed at: 2019-10-30 18:20:46.313870
 Duration taken: 6m 22s seconds

The model is downloaded to a folder that corresponds to the model name in the current directory. You can specify another path using the -d option.

Example: Downloading a model to a specific directory (./models).

root@localhost:~# ngc registry model download-version nvidia/bert_for_tensorflow:1 -d ./models
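
If the target directory does not exist yet, you may need to create it before downloading. Based on the path pattern in the earlier output, the downloaded files land in a sub-folder named after the model and version, which you can then inspect (the directory and folder names here are illustrative):

root@localhost:~# mkdir -p ./models
root@localhost:~# ls ./models/bert_for_tensorflow_v1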

Viewing Model-script Information

There are several commands for viewing information about available model-scripts.

To see a list of model-scripts that are provided by NVIDIA:

root@localhost:~# ngc registry model-script list

+-----------------+-----------------+----------------+-----------------+------------+-----------+---------------+------------+
| Name            | Registry        | Latest Version | Application     | Framework  | Precision | Last Modified | Permission |
+-----------------+-----------------+----------------+-----------------+------------+-----------+---------------+------------+
| BERT for        | nvidia/bert_for | 3              | NLP             | PyTorch    | FPBOTH    | Oct 19, 2019  | unlocked   |
| PyTorch         | _pytorch        |                |                 |            |           |               |            |
| BERT for        | nvidia/bert_for | 4              | NLP             | TensorFlow | FPBOTH    | Oct 21, 2019  | unlocked   |
| TensorFlow      | _tensorflow     |                |                 |            |           |               |            |
| Clara Deploy    | nvidia/clara_de | 4              | SEGMENTATION    | TensorFlow | FPBOTH    | Oct 21, 2019  | unlocked   |
| SDK             | ploy_sdk        |                |                 |            |           |               |            |
| Clara AI        | nvidia/clara_tr | 1              | KUBEFLOW_PIPELI | TensorFlow | FP32      | Oct 19, 2019  | locked     |
| Medical Imaging | ain             |                | NE              |            |           |               |            |

To view detailed information about a model-script, you can specify the model-script

root@localhost:~# ngc registry model-script info nvidia/bert_for_pytorch
 model-script Information
     Name: bert_for_pytorch
     Application: NLP
     Training Framework: PyTorch
     Model Format: PyTorch PTH
     Precision: FP16, FP32

or the model-script version.

root@localhost:~# ngc registry model-script info nvidia/bert_for_pytorch:3
 model_script Version Information
     Id: 3
     Batch Size: 0
     Memory Footprint: 0
     Number Of Epochs: 0
     Accuracy Reached: 0.0
     GPU Model: V100

Downloading a Model-script

To download a model-script from the registry to your local disk, specify the model-script name and version.

root@localhost:~# ngc registry model-script download-version nvidia/<model-script-name>:<version>

Example: Downloading a model-script to the current directory.

The following is an example showing the output confirming completion of the download:

root@localhost:~# ngc registry model-script download-version nvidia/bert_for_pytorch:1
 Downloaded 275.69 KB in 6s, Download speed: 45.87 KB/s
 Transfer id: bert_for_pytorch_v1 Download status: Completed.
 Downloaded local path: /root/bert_for_pytorch_v1
 Total files downloaded: 49
 Total downloaded size: 275.69 KB
 Started at: 2019-10-30 18:34:24.956435
 Completed at: 2019-10-30 18:34:30.970395
 Duration taken: 6s seconds

The model-script is downloaded to a folder that corresponds to the model-script name in the current directory. You can specify another path using the -d option.

Example: Downloading a model-script to a specific directory (./model-scripts).

root@localhost:~# ngc registry model-script download-version nvidia/bert_for_pytorch:1 -d ./model-scripts