Model Portability with ONNX on the AWS Deep Learning AMIs

The AWS Deep Learning AMIs (DLAMI) for Ubuntu and Amazon Linux now include the Open Neural Network Exchange (ONNX), making it easier to move models between deep learning frameworks. This post gives an overview of ONNX and shows how to use it on the DLAMI to transfer models from one framework to another.

What is ONNX?

ONNX is an open-source format and library for encoding and decoding deep learning models. It defines a standard representation for a neural network’s computational graph, along with a comprehensive set of operators used in neural network architectures. ONNX is supported by many popular deep learning frameworks and tools, including Apache MXNet, PyTorch, Chainer, Microsoft Cognitive Toolkit, and TensorRT. This growing adoption across major tools lets machine learning developers move their models between platforms and pick the most suitable tool for each task.
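
To make the standardized graph concrete, here is a minimal sketch that uses the onnx Python package to load a model file and print its computational graph and the operator types it contains. It assumes an ONNX file on disk, such as the vgg16.onnx we export later in this post:

import onnx

# An ONNX file is a serialized protobuf message describing the model
model = onnx.load('vgg16.onnx')

# Print the standardized computational graph: inputs, nodes, and outputs
print(onnx.helper.printable_graph(model.graph))

# List the distinct operator types the network is built from
print(sorted({node.op_type for node in model.graph.node}))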

Exporting a Chainer Model to ONNX

To demonstrate the process of exporting a Chainer model to an ONNX file, we will first launch a DLAMI instance on either Ubuntu or Amazon Linux. If you’re new to this, the DLAMI getting started tutorial walks you through the setup; you can also launch an instance from the command line, as sketched below.
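
A minimal sketch using the AWS CLI; the AMI ID, key pair, and security group below are placeholders to replace with your own values, and you can find the current DLAMI AMI ID for your region in the AWS Marketplace:

$ aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type p3.2xlarge --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx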

Once connected to the DLAMI via SSH, we will activate the Chainer Python 3.6 Conda environment, which is pre-configured with ONNX and onnx-chainer, an additional package that provides ONNX support for Chainer.

$ source activate chainer_p36

Next, start a Python shell and run the following commands to load a VGG-16 convolutional neural network for object recognition, and export the model to an ONNX file.

import numpy as np
import onnx_chainer
from chainercv.links import VGG16

# Downloading a pre-trained model and loading it as a Chainer model
model = VGG16(pretrained_model='imagenet')

# Creating synthetic input to export the model to ONNX
x = np.zeros((1, 3, 224, 224), dtype=np.float32)
out = onnx_chainer.export(model, x, filename='vgg16.onnx')

With just a few lines of code, you have exported your Chainer model to the ONNX format; the vgg16.onnx file is saved in your current directory.
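
Before importing the file into another framework, a quick sanity check is worthwhile. Here is a minimal sketch using the onnx package that is pre-installed in this environment; onnx.checker raises an exception if the file does not conform to the ONNX specification:

import onnx

# Load the file we just wrote and validate it against the ONNX spec
onnx_model = onnx.load('vgg16.onnx')
onnx.checker.check_model(onnx_model)

# Inspect the declared input: its name, element type, and shape
print(onnx_model.graph.input[0])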

Importing an ONNX Model into MXNet

Now that we have our Chainer model exported to ONNX, let’s see how it can be imported into MXNet for inference. We will activate the DLAMI’s MXNet Python 3.6 Conda environment, which includes ONNX and MXNet 1.2.1. This version of MXNet introduced the ONNX import API that we will utilize.

$ source deactivate
$ source activate mxnet_p36

In the Python shell, run the following commands to load the ONNX model you exported from Chainer:

from mxnet.contrib import onnx as onnx_mxnet
sym, arg_params, aux_params = onnx_mxnet.import_model("vgg16.onnx")
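
The import returns a standard MXNet Symbol plus two dictionaries that map parameter names to NDArray weights. If you are curious, a quick optional inspection:

# Peek at what the import produced
print(sym.list_outputs())                   # the graph's output names
print(len(arg_params), len(aux_params))     # counts of trained weights and auxiliary states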

That’s it! You have successfully loaded the ONNX model into MXNet, and the symbolic graph and parameters are ready to use. Next, we will download an image along with the ImageNet class labels that the model was trained on, so we can test object recognition.

import mxnet as mx
mx.test_utils.download('https://s3.amazonaws.com/onnx-mxnet/dlami-blogpost/hare.jpg')
mx.test_utils.download('http://data.mxnet.io/models/imagenet/synset.txt')
with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

Our input image, hare.jpg, is a photograph of a hare.

Next, we will load the image and pre-process it into a tensor that matches the model’s required input tensor shape:

import matplotlib.pyplot as plt
import numpy as np
from mxnet import nd

# Load the image (height x width x channel), reorder to channel-first,
# and add a batch dimension to get the NCHW layout the model expects
image = plt.imread("hare.jpg")
image = np.expand_dims(np.transpose(image, (2,0,1)), axis=0).astype(np.float32)
input_tensor = nd.array(image)

Now, we’re ready to initialize and bind our MXNet module:

# Input name is the model's input node name, defined by the exporting library
input_name = sym.list_inputs()[0]
data_shapes = [(input_name, input_tensor.shape)]
# Initialize and bind the Module
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), data_names=[input_name], label_names=None)
mod.bind(for_training=False, data_shapes=data_shapes, label_shapes=None)
mod.set_params(arg_params=arg_params, aux_params=aux_params)
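
As an aside, if you prefer MXNet’s imperative Gluon API, the imported symbol and parameters can also be wrapped in a gluon.nn.SymbolBlock. This is a rough sketch following the pattern MXNet’s own ONNX tutorials used at the time; note that _load_init is an internal helper, so treat it as illustrative rather than a stable API:

from mxnet import gluon

# Wrap the imported symbol in a Gluon block
net = gluon.nn.SymbolBlock(outputs=sym, inputs=mx.sym.var(input_name))

# Copy the imported weights into the block's parameters
net_params = net.collect_params()
for name in arg_params:
    if name in net_params:
        net_params[name]._load_init(arg_params[name], ctx=mx.cpu())
for name in aux_params:
    if name in net_params:
        net_params[name]._load_init(aux_params[name], ctx=mx.cpu())

# net(input_tensor) now runs the same forward pass imperatively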

Finally, back in the Module workflow, run inference and display the top probability and class:

mod.forward(mx.io.DataBatch([input_tensor]))

probabilities = mod.get_outputs()[0].asnumpy()[0]
max_probability = np.max(probabilities)
max_class = labels[np.argmax(probabilities)]

print('Highest probability=%f, class=%s' % (max_probability, max_class))

The output will reveal the model’s prediction, identifying the image as a hare with a probability of 97.9%.
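
To see more of the probability distribution than the single best class, a small follow-up sketch prints the five most likely labels, reusing probabilities and labels from above:

# Rank classes by probability, highest first, and show the top five
top5 = np.argsort(probabilities)[::-1][:5]
for i in top5:
    print('probability=%f, class=%s' % (probabilities[i], labels[i]))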

Conclusion

In this post, you learned how to use ONNX on the DLAMI to move models across frameworks. This flexibility lets you choose the most appropriate tool for your needs, whether you are training a new model, fine-tuning a pre-trained one, running inference, or serving models. To get started quickly with the AWS Deep Learning AMIs, check out our getting started tutorial. For more resources, visit the DLAMI ONNX tutorials and our developer guide for additional tutorials, resources, and release notes. The latest AMIs are available on the AWS Marketplace. You can also subscribe to our discussion forum for new launch announcements and to ask questions.
