Install a Helm Chart That Deploys TensorFlow* Serving

ID 672066
Updated 12/16/2020
Version Latest

Download Command

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/tensorflow-serving-0.1.0.tgz

Description

Before running this how-to, ensure you have Helm 3 installed per the instructions on this page:

https://helm.sh/docs/intro/install/
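
You can confirm that Helm 3 is available before proceeding (the output shown is illustrative; your exact version will differ):

helm version --short
# v3.x.x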

To install a Helm Chart that deploys TensorFlow* Serving, first download and extract the package and navigate into the resulting directory:

wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v2_2_0/tensorflow-serving-0.1.0.tgz
tar -xvf tensorflow-serving-0.1.0.tgz
cd tensorflow-serving
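
The extracted directory is a standard Helm chart, so you should see the usual layout of Chart.yaml, values.yaml, and a templates directory (the exact contents may vary between package versions):

ls
# Chart.yaml  templates  values.yaml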

Then run:

# Note: Ensure a namespace with the following name exists:
export SERVING_NAMESPACE=<NAME_SPACE_TO_DEPLOY_CHART>   # tensorflow-serving
export CHART_NAME=<NAME_OF_DEPLOYED_CHART>   # resnet50v1-5
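
# If the namespace does not exist yet, one way to create it
# (a sketch; adjust to your cluster's conventions):
kubectl get namespace "$SERVING_NAMESPACE" >/dev/null 2>&1 || \
    kubectl create namespace "$SERVING_NAMESPACE"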

# Install a Serving chart
helm install $CHART_NAME . \
     --namespace $SERVING_NAMESPACE \
     --debug \
     --set service.internalPort=8500 \
     --set podSecurityContext.fsGroup=<YOUR_FS_GROUP> \
     --set podSecurityContext.runAsGroup=<YOUR_DESIRED_GROUP> \
     --set podSecurityContext.runAsUser=<YOUR_USER_ID> \
     --set model_base_path=<MODEL_BASE_PATH_ENV_VALUE> \
     --set model_name=<MODEL_NAME_ENV_VALUE> \
     --set models_path=<MODEL_LOCAL_PATH> \
     --set replicaCount=<SERVING_POD_REPLICAS>

This will return:

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace tensorflow-serving -l "app.kubernetes.io/name=tensorflow-serving,app.kubernetes.io/instance=resnet50v1-5" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace tensorflow-serving $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8501 to use your application"
  kubectl --namespace tensorflow-serving port-forward $POD_NAME 8501:$CONTAINER_PORT
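
Before sending requests, you can optionally verify the release and confirm the model has loaded. The commands below are a sketch: the helm and kubectl calls are standard, and the final curl queries TensorFlow Serving's model status REST endpoint through the port-forward started above (substitute the name you set via model_name, e.g. resnet):

helm list --namespace $SERVING_NAMESPACE
kubectl get pods --namespace $SERVING_NAMESPACE
curl http://127.0.0.1:8501/v1/models/<MODEL_NAME_ENV_VALUE>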

After running the commands from the NOTES output, open another shell and run the following:

# Clone TensorFlow Serving repo:
git clone https://github.com/tensorflow/serving.git
export TF_SERVING_ROOT=$(pwd)/serving

# Set up a virtual environment (requires the virtualenv package):
python3 -m virtualenv -p python3 .venv3
. .venv3/bin/activate
pip install requests

# Send a REST request to your served model:
python $TF_SERVING_ROOT/tensorflow_serving/example/resnet_client.py

This should return:

Prediction class: 286, avg latency: 57.318000000000005 ms
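
If you prefer not to clone the full repository, the request resnet_client.py sends can be approximated from the shell. The sketch below assumes the served model is named resnet, the port-forward from the NOTES step is still active, and GNU coreutils base64 is available (the -w0 flag disables line wrapping); the image URL and JSON payload format mirror the example client:

# Download the test image used by the example client:
curl -sL https://tensorflow.org/images/blogs/serving/cat.jpg -o /tmp/cat.jpg

# POST the base64-encoded JPEG to the model's REST predict endpoint:
curl -s -X POST http://127.0.0.1:8501/v1/models/resnet:predict \
     -d '{"instances": [{"b64": "'"$(base64 -w0 /tmp/cat.jpg)"'"}]}'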

Documentation and Sources

Get Started
Main GitHub*
Readme
Release Notes
Get Started Guide

Code Sources
Report Issue


License Agreement

LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the license file for additional details.