NVIDIA’s Transfer Learning Toolkit 3.0 Accelerates Building…


NVIDIA has launched Transfer Learning Toolkit (TLT) 3.0, which dramatically reduces the time it takes to build computer vision and conversational AI models. TLT comes with multiple pre-trained models ready to be deployed in the cloud or at the edge. 

Computer vision-based neural networks are highly complex. They combine multiple algorithms and techniques to perform image classification and object detection, and these deep neural networks must be trained on very large datasets to reach accurate computer vision models.

Residual neural network (ResNet) is one of the most popular artificial neural network architectures used for training computer vision AI models. Its architecture learns to identify the visual patterns in images that computer vision AI depends on, and it is applied to a variety of datasets to train image classification and object detection models. ImageNet is a popular dataset of over 15 million labeled high-resolution images spanning around 22,000 categories. When a ResNet model is trained on ImageNet, it can classify an image into one of those categories, and the resulting model can be used in many computer vision-based applications without any retraining.
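The defining feature of ResNet is the skip connection, where a block's input is added back to its output. The following pure-NumPy sketch (illustrative only; real ResNets use convolutions, batch normalization, and many stacked blocks) shows that idea in its simplest form:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(F(x) + x).

    F(x) is a small two-layer transform; the "+ x" skip connection
    lets gradients flow through very deep networks, which is what
    made ResNet-style architectures trainable at depth."""
    fx = relu(x @ w1) @ w2   # the residual function F(x)
    return relu(fx + x)      # skip connection adds the input back

# Toy usage: input and output share the same width so the skip
# connection lines up (real ResNets use projections when it doesn't).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)
```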

Apart from ImageNet, other datasets such as Microsoft Common Objects in Context (COCO), Caltech-256, PASCAL VOC and CIFAR-100 can be used for training vision AI models. Similarly, there are many proven neural network architectures for computer vision, such as AlexNet, EfficientNet, VGG, MobileNet, GoogLeNet, SqueezeNet, YOLO and DarkNet.

Training models based on sophisticated neural network architectures depends on two critical elements – large datasets and massive computational power. With TLT, NVIDIA addresses these challenges by making it possible to train models on regular compute infrastructure with smaller datasets.

TLT relies on a popular deep learning technique called transfer learning, where a proven, well-known neural network architecture is used to train newer models with smaller datasets. In transfer learning, most of the neural network architecture of a fully trained model is retained, while a minor part of it is replaced and retrained on the custom dataset. With transfer learning, models can be trained on smaller datasets using relatively less powerful computers.
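The core mechanic described above – keep a frozen, pretrained feature extractor and train only a small replacement head – can be sketched in a few lines of NumPy. This is a minimal illustration with a random matrix standing in for the pretrained backbone and a synthetic two-class dataset; it is not TLT's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: its weights stay frozen.
W_backbone = rng.standard_normal((16, 8)) * 0.3

def features(x):
    """Frozen feature extractor – the part of the network we keep."""
    return np.maximum(x @ W_backbone, 0.0)

# New classification head (2 classes): the minor part we replace
# and train from scratch on the smaller custom dataset.
W_head = np.zeros((8, 2))

# Tiny synthetic "custom dataset".
X = rng.standard_normal((64, 16))
y = (X[:, 0] > 0).astype(int)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Gradient descent on the head only; the backbone is never updated,
# which is why transfer learning needs far less data and compute.
for _ in range(200):
    F = features(X)
    p = softmax(F @ W_head)
    grad = F.T @ (p - np.eye(2)[y]) / len(X)
    W_head -= 0.5 * grad

acc = (softmax(features(X) @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because only the small head is trained, the number of learnable parameters – and hence the data and compute required – is a fraction of what full training would need.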

For example, a model based on ResNet trained with ImageNet can be reused to train a model to identify a car’s make and model.

TLT 3.0 comes with multiple models built using the transfer learning technique that are fine-tuned and optimized for production use. For example, TrafficCamNet, based on DetectNet_v2 with a ResNet18 backbone, can detect and track cars. Developers can easily extend the available pre-trained models with custom datasets.

Apart from computer vision AI models, TLT also comes with conversational AI models for performing automatic speech recognition (ASR) and text-to-speech (TTS).

According to NVIDIA, the most important differentiator of TLT is that it follows a zero-coding paradigm, shipping with ready-to-use Python scripts and configuration specifications with default parameter values that let users kick-start training and fine-tuning. This lowers the barrier to entry, enabling users without a deep understanding of models, deep learning expertise, or advanced coding skills to train new models and fine-tune pre-trained ones. With the new 3.0 release, the toolkit takes a significant turn and supports the most useful conversational AI models.

NVIDIA has integrated TLT 3.0 with NVIDIA GPU Cloud (NGC), the one-stop-shop for container images, pre-trained models, Jupyter Notebooks and Helm charts. The workflow is based on downloading a pre-trained model, retraining with a custom dataset, pruning it, and retraining the model to recover the loss involved in pruning. All of this can be done without writing a single line of code. 
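The pruning step in this workflow removes weights that contribute little to the output, shrinking the model for deployment; the follow-up retraining recovers the accuracy lost. TLT's pruner works on whole channels of the network, but the idea can be illustrated with a simple element-wise, magnitude-based sketch (illustrative only, not TLT's algorithm):

```python
import numpy as np

def prune_by_magnitude(w, fraction):
    """Zero out the given fraction of smallest-magnitude weights.

    Small weights contribute least to the layer's output, so they
    are the cheapest to remove; retraining afterwards lets the
    remaining weights compensate for the loss."""
    k = int(w.size * fraction)
    threshold = np.sort(np.abs(w), axis=None)[k]
    mask = np.abs(w) >= threshold
    return w * mask, mask

# Toy weight matrix standing in for one layer of a trained model.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned, mask = prune_by_magnitude(w, 0.8)

sparsity = 1.0 - mask.mean()
print(f"sparsity after pruning: {sparsity:.2f}")
```

A sparser (or, with channel pruning, physically smaller) model means less memory and faster inference, which is what makes the pruned models practical on edge devices.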

TLT 3.0 needs an x86 machine running Ubuntu 18.04 with the NVIDIA Docker runtime. Once installed, the TLT launcher pulls the required images from NGC along with a set of Jupyter Notebooks that guide developers through the toolkit. The trained models can be further optimized to run at the edge on the Jetson family of devices.

With its no-code approach, TLT 3.0 makes AI accessible to developers who are not familiar with deep learning and neural networks. The support for both computer vision and conversational AI models makes TLT 3.0 an appealing technology for AI enthusiasts.
