Deep Learning Inference

After a neural network is trained, it is deployed to run inference: classifying, recognizing, and processing new inputs. An inference platform should let you develop and deploy applications quickly, deliver low and deterministic latency for real-time workloads, and simplify the acceleration of convolutional neural networks (CNNs) both in the data center and at the edge.
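To make the idea concrete, here is a minimal sketch of what inference means in practice: a forward pass through fixed, pre-trained weights, with no gradients and no parameter updates. The two-layer network and its weights below are hypothetical stand-ins for values a real training run would produce, not any particular deployed model.

```python
import numpy as np

# Hypothetical "trained" weights for a tiny 2-layer classifier.
# In a real deployment these would be loaded from a checkpoint file.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # layer 1: 4 input features -> 8 hidden units
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 3))   # layer 2: 8 hidden units -> 3 output classes
b2 = np.zeros(3)

def infer(x):
    """Classify a batch of inputs: one forward pass, then argmax.

    No backpropagation or weight updates occur -- the weights are frozen.
    """
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = h @ W2 + b2               # class scores
    return logits.argmax(axis=-1)      # predicted class index per input

# "New inputs" arriving after training: 5 samples with 4 features each.
batch = rng.standard_normal((5, 4))
preds = infer(batch)
print(preds.shape)  # one class prediction per input: (5,)
```

Optimized inference runtimes (compilers, reduced precision, kernel fusion) exist to make exactly this forward pass as fast and predictable as possible on the target hardware.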