“Intel® processors offer the high performance, reliability, and stability that a2 requires for this mission-critical solution. We are also more than happy to join the Intel partner ecosystem and enjoy the technical and business support associated with the Intel brand.”
“Our solution is designed to employ Intel®-optimized machine-learning hardware and software technologies to train, test, and operationalize a model to help detect COVID-19 and 14 other thoracic diseases using chest scans. We used the Intel® DevCloud and Intel® Distribution of OpenVINO™ toolkit to optimize and deploy our machine-learning models across multiple Intel® platforms. As a result, our team was able to accelerate prototyping and deployment at lower cost on the best-performing Intel® architecture for our solution.”
“Because we serve customers with so many different needs, it’s important to quickly achieve the right balance of price and performance for each of our applications. Intel DevCloud lets us test multiple platforms in parallel. That’s a lot of time savings—and time is money—so it’s a no-brainer.”
With development support from Intel, Switzerland’s Distance University of Applied Sciences deployed an AI-enhanced proctoring solution, optimized with the Intel Distribution of OpenVINO toolkit and tested on Intel DevCloud.
“Because developers can quickly evaluate the performance of their applications on multiple edge computing systems by using Intel DevCloud, they can not only shorten inspection time on the path to market, but also expect tremendous savings on the purchase and maintenance of verification equipment. We are confident that Intel DevCloud will accelerate and streamline operations and create new value for more IoT businesses and more customers.”
Tomohiro Nagao, senior manager, Healthcare Business Unit
“Our new approach uses dynamic loading of monolingual models to achieve speech recognition with high performance across languages and accents. Our hope with this project is to build speech technology that enables developers to extend the benefits of speech-related AI to emerging markets for the post-COVID world.”
“As a member of the Intel® AI: In Production Program, we use Intel DevCloud and the Intel Distribution of OpenVINO toolkit extensively; CPU-run engines optimized with the toolkit are currently in production. So far, we have used DevCloud primarily for benchmarking, which enabled us to determine optimal edge hardware configurations, algorithmic choices, and engine parameters, and to perform production-level load estimation and scalability assessment under different production scenarios. The results from this exercise in DevCloud show that running deep neural networks on CPUs has become on par with GPUs in terms of performance, and it is constantly improving with newer techniques like int8 quantization, making advanced edge inference solutions like ours feasible on CPUs. With the OpenVINO toolkit, per-image inference time speeds up by 10x compared with unoptimized CPU execution, which allows us to run our solution as a real-time IoT application operating on Intel® Xeon® processors at multiple stores.”
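The int8 quantization mentioned in the quote maps 32-bit floating-point values onto 8-bit integers so inference can use faster integer arithmetic. The sketch below illustrates the underlying idea with symmetric per-tensor quantization in plain Python; it is an assumption-laden teaching example, not OpenVINO's actual calibration pipeline, which selects scales from representative data.

```python
# Illustrative sketch of symmetric per-tensor int8 quantization.
# Real toolchains (e.g., OpenVINO's post-training optimization) calibrate
# scales from sample data and quantize per channel; this is the minimal idea.

def quantize_int8(values):
    """Map float values to int8 range [-127, 127] using a single scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the quantized integers."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

The trade-off is a small, bounded rounding error per value in exchange for integer math and a 4x smaller memory footprint, which is why quantized CPU inference can approach GPU throughput for many workloads.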
“Intel DevCloud provided confidence that our solution will be able to accommodate the number of cameras that customers have in their environment, and it helped remove guesswork from the pilot process.”
“WonderStore uses Intel DevCloud to train models with data sets of over 30,000 pictures in half the time compared with on-premises servers. Intel DevCloud allows us to choose our hardware configuration to optimize training, enabling us to create vertical CV models for each customer’s needs.”