
TAO Pretrained Object Detection

Description: Pretrained weights to facilitate transfer learning using TAO Toolkit.
Publisher: NVIDIA
Latest Version: cspdarknet_tiny
Modified: December 11, 2023
Size: 28.57 MB

What is Train Adapt Optimize (TAO) Toolkit?

Train Adapt Optimize (TAO) Toolkit is a Python-based AI toolkit for taking purpose-built pre-trained AI models and customizing them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment. This model is ready for commercial use.

The pre-trained models accelerate the AI training process and reduce the costs associated with large-scale data collection, labeling, and training models from scratch. Transfer learning with pre-trained models can be used for AI applications in smart cities, retail, healthcare, industrial inspection, and more.

Build end-to-end services and solutions for transforming pixels and sensor data to actionable insights using TAO, DeepStream SDK and TensorRT. The models are suitable for object detection and classification.

Object Detection Using TAO

Object detection is a popular computer vision technique that can detect one or multiple objects in a frame. Object detection recognizes the individual objects in an image and places bounding boxes around them. This model card contains pretrained weights that may be used as a starting point with the following object detection networks in Train Adapt Optimize (TAO) Toolkit to facilitate transfer learning.

  • YOLOV3
  • YOLOV4
  • YOLOV4-Tiny
  • FasterRCNN
  • SSD
  • DSSD
  • RetinaNet

These weights are trained on a subset of the Google OpenImages dataset. The following backbones are supported with these detection networks.

  • resnet10/resnet18/resnet34/resnet50/resnet101
  • vgg16/vgg19
  • googlenet
  • mobilenet_v1/mobilenet_v2
  • squeezenet
  • darknet19/darknet53
  • efficientnet_b0
  • cspdarknet19/cspdarknet53
  • cspdarknet-tiny

Some combinations might not be supported. See the matrix below for all supported combinations.

[Image: matrix of supported backbone and detection network combinations]

To see the full list of supported backbones, scroll to the Version History tab.

Note: These are unpruned models with just the feature extractor weights, and they may not be used without re-training in an object detection application.

Note: The ResNet101 model is currently only supported for FasterRCNN. Please make sure to set the all_projections field to False in the spec file when training a ResNet101 model. For more information about this parameter, please refer to the TAO Getting Started Guide.
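As a hedged sketch, the relevant portion of a FasterRCNN experiment spec might look like the snippet below; surrounding fields are omitted and exact field names can vary across TAO versions, so verify against the TAO Getting Started Guide.

model_config {
  # ResNet101 feature extractor; disable projection shortcuts as noted above
  arch: "resnet:101"
  all_projections: False
}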

Note: The pre-trained weights in this model are only for the detection networks above and shouldn't be used for DetectNet_v2 based object detection models. For pre-trained weights with DetectNet_v2, click here

Running Object Detection Models Using TAO

The object detection apps in TAO expect data in KITTI file format. TAO provides a simple command line interface to train a deep learning model for object detection.
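For reference, a KITTI label file contains one line per object with 15 space-separated fields; for TAO detection training, typically only the class name and the four pixel bounding-box coordinates are used, and the remaining fields may be set to zero. A minimal illustrative line (class name and values are made up):

car 0.00 0 0.00 603.17 181.33 661.84 222.04 0.00 0.00 0.00 0.00 0.00 0.00 0.00

A training run is then launched through the TAO CLI; the sketch below assumes the YOLOv4 network and the TAO launcher syntax, which varies between TAO releases, so treat it as an assumption to check against the TAO documentation:

tao yolo_v4 train -e <experiment_spec> -r <results_dir> -k <encryption_key>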

The models in this model card are only compatible with TAO Toolkit. For more information about the TAO container, please visit the TAO container page.

Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
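For example (the exact image path and tag depend on the release; see the TAO container page for the current values):

docker pull nvcr.io/nvidia/tao/tao-toolkit:<tag>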

  1. Install the NGC CLI from ngc.nvidia.com

  2. To view all the backbones that are supported by the object detection architectures in TAO, run:

ngc registry model list nvidia/tao_pretrained_object_detection:*

  3. Download the model:

ngc registry model download-version nvidia/tao_pretrained_object_detection:<template> --dest <path>
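For example, to fetch the cspdarknet_tiny weights (the latest version listed above) into a local directory:

ngc registry model download-version nvidia/tao_pretrained_object_detection:cspdarknet_tiny --dest ./pretrained_object_detection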

Instructions to run the sample notebook

  1. Get the NGC API key from the SETUP tab on the left. Please store this key for future use. Detailed instructions can be found here.

  2. Configure the NGC command line interface using the command mentioned below and follow the prompts.

ngc config set
  3. Download the sample notebooks from NGC using the command below:

ngc registry resource download-version "nvidia/tao_cv_samples:v1.0.2"

  4. Invoke the Jupyter notebook using the following command:

jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

  5. Open an internet browser and type in the following URL to start running the notebooks when running on a local machine:

http://0.0.0.0:8888

If you wish to view the notebook from a remote client, please modify the URL as follows:

http://a.b.c.d:8888

where a.b.c.d is the IP address of the machine running the container.

License

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, please visit this link, or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards here. Please report security vulnerabilities or NVIDIA AI Concerns here.

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.