
TensorRT

Description
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network and produces a highly optimized runtime engine that performs inference for that network.
Publisher
NVIDIA
Latest Tag
24.03-py3
Modified
March 27, 2024
Compressed Size
3.92 GB
Multinode Support
No
Multi-Arch Support
Yes
24.03-py3 (Latest) Security Scan Results

Security scan results for the Linux / arm64 and Linux / amd64 images are shown as charts on the NGC catalog page.

What Is TensorRT?

The core of NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA graphics processing units (GPUs). TensorRT takes a trained network, which consists of a network definition and a set of trained parameters, and produces a highly optimized runtime engine that performs inference for that network.

You can describe a TensorRT network using a C++ or Python API, or you can import an existing Caffe, ONNX, or TensorFlow model using one of the provided parsers.

TensorRT provides C++ and Python APIs for expressing deep learning models through the Network Definition API, or for loading a pre-defined model through one of the parsers, so that TensorRT can optimize and run it on an NVIDIA GPU. TensorRT applies graph optimizations and layer fusion, among other optimizations, and finds the fastest implementation of the model by drawing on a diverse collection of highly optimized kernels. TensorRT also supplies a runtime that you can use to execute the network on all NVIDIA GPUs from the Kepler generation onwards.

TensorRT also includes optional high-speed mixed-precision capabilities, introduced with the Tegra X1 and extended with the Pascal, Volta, and Turing architectures.
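
As a concrete illustration of that workflow, below is a minimal sketch of building an engine from an ONNX model with the TensorRT Python API available in this container. The model file resnet50.onnx, the output plan name, and the FP16 choice are assumptions for the example, and the exact API surface may differ slightly between TensorRT releases.

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse a trained ONNX model into the TensorRT network definition.
with open("resnet50.onnx", "rb") as f:  # placeholder model file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

# Build an optimized, serialized engine; the FP16 flag opts in to mixed precision.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
serialized_engine = builder.build_serialized_network(network, config)

with open("resnet50.plan", "wb") as f:  # placeholder output name
    f.write(serialized_engine)

The serialized plan can later be reloaded with trt.Runtime and deserialize_cuda_engine for inference; the samples under /workspace/tensorrt/samples walk through the same flow in more detail.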

Need enterprise support? NVIDIA global support is available for TensorRT with the NVIDIA AI Enterprise software suite. Check out NVIDIA LaunchPad for free access to a set of hands-on labs with TensorRT hosted on NVIDIA infrastructure.

Join the TensorRT and Triton community and stay current on the latest product updates, bug fixes, content, best practices, and more.

Framework Integrations

TensorRT is also integrated directly into PyTorch and TensorFlow. Torch-TensorRT and TensorFlow-TensorRT allow users to go directly from any trained model to a TensorRT-optimized engine in just one line of code, all without leaving the framework. More information on integrations can be found on the TensorRT Product Page.

To use the framework integrations, please run their respective framework containers: PyTorch, TensorFlow.
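
As an example of the PyTorch integration, a minimal Torch-TensorRT sketch run inside the PyTorch container might look like the following; the torchvision ResNet-50 model, the input shape, and the FP16 setting are illustrative assumptions only.

import torch
import torch_tensorrt
import torchvision.models as models

# Any trained PyTorch model works here; ResNet-50 is just an example.
model = models.resnet50(weights=None).eval().cuda()
example_input = torch.randn(1, 3, 224, 224, device="cuda")

# The single compile call hands the model to TensorRT for optimization.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[example_input],
    enabled_precisions={torch.half},  # allow FP16 kernels where beneficial
)

with torch.no_grad():
    output = trt_model(example_input)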

Running TensorRT

Before you can run an NGC deep learning framework container, your Docker environment must support NVIDIA GPUs. To run a container, issue the appropriate command as explained in the Running A Container chapter in the NVIDIA Containers And Frameworks User Guide and specify the registry, repository, and tags. For more information about using NGC, refer to the NGC Container User Guide.

The method implemented in your system depends on the DGX OS version installed (for DGX systems), the specific NGC Cloud Image provided by a Cloud Service Provider, or the software that you have installed in preparation for running NGC containers on TITAN PCs, Quadro PCs, or vGPUs.

Procedure

  1. Select the Tags tab and locate the container image release that you want to run.

  2. In the Pull Tag column, click the icon to copy the docker pull command.

  3. Open a command prompt and paste the pull command. The pulling of the container image begins. Ensure the pull completes successfully before proceeding to the next step.

  4. Run the container image.

    If you have Docker 19.03 or later, a typical command to launch the container is:

docker run --gpus all -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorrt:xx.xx-py3

    If you have Docker 19.02 or earlier, a typical command to launch the container is:

nvidia-docker run -it --rm -v local_dir:container_dir nvcr.io/nvidia/tensorrt:xx.xx-py3

Where:

  • -it runs the container in interactive mode
  • --rm deletes the container when it exits
  • xx.xx is the container version. For example, 24.03. A filled-in example follows this list.
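
For example, assuming the current 24.03 release and a local directory $HOME/trt_work that should appear inside the container as /workspace/host, the command becomes:

docker run --gpus all -it --rm -v $HOME/trt_work:/workspace/host nvcr.io/nvidia/tensorrt:24.03-py3
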
  5. You can build and run the TensorRT C++ samples from within the image. For details on how to run each sample, see the TensorRT Developer Guide.
cd /workspace/tensorrt/samples
make -j4
cd /workspace/tensorrt/bin
./sample_mnist
  6. You can also execute the TensorRT Python samples.
cd /workspace/tensorrt/samples/python/introductory_parser_samples
python caffe_resnet50.py -d /workspace/tensorrt/python/data
  7. See /workspace/README.md inside the container for information on customizing your image.

Python Dependencies

In order to save space, some of the dependencies of the Python samples have not been pre-installed in the container. To install these dependencies, run the following command before you run these samples:

/opt/tensorrt/python/python_setup.sh

Suggested Reading

For the latest TensorRT container Release Notes, see the TensorRT Container Release Notes website.

For a full list of the supported software and specific versions that come packaged with this framework based on the container image, see the Frameworks Support Matrix.

For the latest TensorRT product Release Notes, Developer Guide, and Installation Guide, see the TensorRT Product Documentation website.

Link to Open Source Code

The TensorRT open source components (samples, parsers, and plugins) are available in the NVIDIA TensorRT GitHub repository.

Security CVEs

To review known CVEs on this image, please refer to the Known Issues section of the Product Release Notes.

License

By pulling and using the container, you accept the terms and conditions of this End User License Agreement.