Description
Simplify AI inference and run it on GPUs, VPUs, and FPGAs with the Model Server from the Intel® Distribution of OpenVINO™ toolkit.
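As a minimal sketch of what serving looks like in practice, the snippet below sends a prediction request to a running OpenVINO Model Server instance over its TensorFlow-Serving-compatible REST API. The host, port, model name, and input shape here are illustrative assumptions, not values from this record; the server itself must already be running (for example, via the `openvino/model_server` Docker image) for the call at the bottom to succeed.

```python
# Hypothetical sketch: querying an OpenVINO Model Server instance over
# its TensorFlow-Serving-compatible REST API. Host, port, model name,
# and input data are placeholder assumptions.
import json
import urllib.request


def build_predict_request(instances):
    """Build the JSON body for a TFS-style :predict call."""
    return json.dumps({"instances": instances}).encode("utf-8")


def predict(host, model, instances):
    """POST to /v1/models/<model>:predict and return the predictions."""
    req = urllib.request.Request(
        f"http://{host}/v1/models/{model}:predict",
        data=build_predict_request(instances),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]


if __name__ == "__main__":
    # Assumes a Model Server listening on a REST port (e.g. 8000)
    # with a model named "my_model" loaded.
    print(predict("localhost:8000", "my_model", [[0.0, 1.0, 2.0]]))
```

The same server also exposes a gRPC endpoint; the REST form is shown here only because it needs no generated client stubs.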