The resources needed to support inferencing on deep neural networks can be substantial, and these operational needs typically drive organizations to update their hardware. However, investing in single-purpose hardware for inferencing can leave you exposed if your computational needs change before your expected refresh cycle. High performance and speed for AI inferencing, coupled with the flexibility of the Intel® hardware that your IT department is already familiar with, can help protect your IT investments. The Intel® Select Solution for AI Inferencing is a turnkey platform for low-latency, high-throughput inference performed on a CPU rather than a separate accelerator card. It gives you a jumpstart to deploying efficient AI inferencing algorithms on a solution composed of validated Intel® architecture building blocks that you can innovate on and take to market. To do so, this solution makes use of a feature of 2nd Generation Intel® Xeon® Scalable processors, Intel® Deep Learning Boost (Intel® DL Boost), which accelerates AI inference by performing in one instruction inferencing calculations that previously took multiple instructions.
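As a rough illustration of the fused operation described above, the sketch below models the INT8 multiply-accumulate pattern that Intel DL Boost's Vector Neural Network Instructions (VNNI) collapse into a single instruction; without VNNI, the same work takes a sequence of separate multiply, widen, and add instructions. The function name `vnni_dot_accumulate` is hypothetical, used here only to illustrate the arithmetic performed per 32-bit accumulator lane, not Intel's actual API.

```python
def vnni_dot_accumulate(acc, a_u8, b_s8):
    """Sketch of one 32-bit lane of a VNNI-style fused operation:
    multiply four 8-bit activations by four 8-bit weights and add
    the products into a 32-bit accumulator in a single step.
    (Hypothetical helper for illustration, not Intel's API.)"""
    assert len(a_u8) == len(b_s8) == 4
    # Hardware fuses these multiplies and adds into one instruction;
    # in Python we simply express the combined arithmetic.
    return acc + sum(a * b for a, b in zip(a_u8, b_s8))

# Example: accumulating one step of a quantized dot product.
acc = vnni_dot_accumulate(0, [1, 2, 3, 4], [10, -20, 30, -40])
print(acc)  # prints -100
```

This per-lane fusion is what lets quantized (INT8) inference run substantially faster on the CPU than the equivalent multi-instruction sequence.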