INSPUR (BEIJING) ELECTRONIC INFORMATION INDUSTRY CO., LTD.
Offerings
Offering
In the past few years, organizations have seen a convergence of massive amounts of data with the compute power and large-capacity storage needed to process it all. The right infrastructure can provide modern businesses with new ways of harnessing data for innovative apps and services built on artificial intelligence (AI). The opportunities are nearly infinite and stretch across almost every field, from financial services to manufacturing to healthcare and beyond.

But organizations with on-premises infrastructure or hybrid cloud models face several challenges on the road to AI. They need to research, select, deploy, and optimize infrastructure that can provide efficient resource utilization while scaling on demand to meet changing business requirements. Beyond scalability, organizations seek easier ways to implement AI initiatives. Many businesses lack sufficient in-house expertise and infrastructure to get started with AI, particularly for deep learning (DL). The road to deploying DL in production environments is time-intensive and complex. Managing the data for AI initiatives can also be a challenge: organizations struggle to extract value from their "data swamps," and it can be complex and resource-intensive to move data from on premises to the cloud for analytics.

The Intel® Select Solution for BigDL on Apache Spark* can help businesses overcome these key challenges and achieve their AI initiatives faster and more easily. The pre-tested and tuned solution eliminates the need for organizations to research and manually optimize infrastructure before they can efficiently pursue their AI initiatives. It reduces the need for specialized in-house expertise to deploy and manage AI infrastructure. And it can help IT organizations improve infrastructure utilization while ensuring scalability to meet the growing needs of their companies.

BigDL on Apache Spark helps solve the IT challenges of DL, data, and specialized expertise by providing standardized big-data storage and compute that scales: hundreds of nodes can be added without degrading performance and without changing the fundamental architecture.

BigDL, a distributed DL library that augments the storage and compute capabilities of Apache Spark, provides efficient, scalable, and optimized DL development. BigDL enables the development of new DL models for training and serving on the same big data cluster. It also supports models from other frameworks, including TensorFlow*, Keras*, and others, so you can import models trained elsewhere into BigDL or use BigDL-trained models in other frameworks. BigDL is supported by Analytics Zoo, which provides a unified AI platform and pipeline with built-in reference use cases to further simplify AI solution development.

BigDL is optimized for Intel®-based platforms with software libraries such as Intel® Math Kernel Library (Intel® MKL) and Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) that increase computational performance. Other supporting software includes the Intel® Distribution for Python*, which accelerates popular machine learning libraries such as NumPy*, SciPy*, and scikit-learn* with integrated Intel® Performance Libraries such as Intel MKL and the Intel® Data Analytics Acceleration Library (Intel® DAAL).
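To illustrate what training "on the same big data cluster" looks like in practice, here is a minimal sketch using the BigDL Python API. It assumes a BigDL 0.x-style installation on a Spark cluster; the module paths, the SGD parameter name, and the Optimizer signature should be verified against your installed release, and the toy data is purely illustrative.

```python
# Minimal sketch: distributed training of a small classifier with BigDL on Spark.
# Assumes the BigDL 0.x Python API; verify names against your installed version.
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import init_engine, create_spark_conf, Sample
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion
from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

sc = SparkContext(conf=create_spark_conf())
init_engine()  # initializes BigDL's engine (Intel MKL threading, etc.)

# Toy data: an RDD of feature/label pairs wrapped as BigDL Samples.
# ClassNLLCriterion expects 1-based labels.
data = sc.parallelize(range(1000)).map(
    lambda i: Sample.from_ndarray(
        np.random.rand(10).astype("float32"),
        np.array([float(i % 2) + 1])))

# A small feed-forward network built from BigDL layers.
model = (Sequential()
         .add(Linear(10, 32)).add(ReLU())
         .add(Linear(32, 2)).add(LogSoftMax()))

optimizer = Optimizer(model=model,
                      training_rdd=data,
                      criterion=ClassNLLCriterion(),
                      optim_method=SGD(learningrate=0.01),
                      end_trigger=MaxEpoch(2),
                      batch_size=64)  # must divide evenly across cluster cores
trained = optimizer.optimize()  # training runs distributed on the Spark cluster
```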
On the hardware side, the Intel Select Solution for BigDL on Apache Spark uses Intel® Xeon® Scalable processors for high performance and Intel® Solid State Drives (SSDs) for better performance and improved reliability compared to traditional hard-disk drives (HDDs).

The Intel Select Solution for BigDL on Apache Spark

The Intel Select Solution for BigDL on Apache Spark helps optimize price/performance while significantly reducing infrastructure evaluation time. It combines Intel Xeon Scalable processors, Intel SSDs, and Intel® Ethernet Network Adapters to empower enterprises to quickly harness a reliable, comprehensive solution that delivers:

§ The ability to prepare your machine learning (ML)/DL infrastructure investments for the future with scalable storage and compute
§ Excellent total cost of ownership (TCO) with multi-purpose hardware that your IT organization is used to managing, in a verified, tested solution that simplifies deployment
§ Accelerated time to market with a turnkey solution that includes a rich development toolset and that is optimized for crucial software libraries
§ The ability to run analytics on data where it is stored

BigDL Application Scenarios

§ Analyze large amounts of data stored on big data (Spark) clusters, for example in HDFS, Apache HBase, or Apache Hive
§ Add DL capabilities (training or inference) to big data (Spark) programs or workflows
§ Run DL applications on existing Hadoop/Spark clusters and easily share those clusters with other workloads (e.g., extract-transform-load, data warehousing, feature engineering, classic machine learning, graph analytics); a scoring sketch follows this list
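As a concrete example of the last scenario, the sketch below loads a previously saved BigDL model and scores CSV records that already live in HDFS. The HDFS paths are placeholders, and the loadModel/predict calls reflect the BigDL 0.x Python API; check them against your installed version.

```python
# Minimal sketch: scoring data that already lives on the cluster with a saved
# BigDL model. Paths are hypothetical; API names per BigDL 0.x.
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import init_engine, create_spark_conf, Sample
from bigdl.nn.layer import Model

sc = SparkContext(conf=create_spark_conf())
init_engine()

# Model files saved earlier with model.saveModel(...); placeholder paths.
model = Model.loadModel("hdfs:///models/my_model.bigdl",
                        "hdfs:///models/my_model.bin")

# Parse feature vectors from CSV lines stored in HDFS; predict() ignores the
# dummy label attached to each Sample.
records = (sc.textFile("hdfs:///data/features.csv")
             .map(lambda line: np.array([float(x) for x in line.split(",")],
                                        dtype="float32"))
             .map(lambda feats: Sample.from_ndarray(feats, np.zeros(1))))

predictions = model.predict(records)  # distributed inference on the cluster
print(predictions.take(3))
```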
Offering
In the context of artificial intelligence and machine learning, training is the stage in which neural networks learn from data. Inference puts that learning into practice: trained models are used to infer and predict outcomes, classifying, identifying, and processing new input data based on what the model has learned. This offering is a DL inference solution for some of the fastest-growing areas of artificial intelligence, such as video analytics, natural language processing, and image processing. As a turnkey platform built from proven Intel® architecture (IA) building blocks, it lets partners innovate and bring an integrated, building-block solution to market, making AI simple and efficient. Once neural network models have been trained, running them for inference efficiently becomes the next challenge, and this solution gives customers a starting point for deploying efficient AI inference. It uses the OpenVINO™ toolkit to accelerate inference, which shortens the time from enterprise data to strategic decisions, provides low latency and high end-to-end throughput, and reduces enterprise costs.
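To show what OpenVINO-accelerated inference looks like in code, here is a minimal CPU-inference sketch using the pre-2022 OpenVINO Python API (IECore). The model file names are placeholders for an IR model produced by the Model Optimizer, and newer OpenVINO releases expose a different (openvino.runtime) API.

```python
# Minimal sketch: CPU inference with OpenVINO's IECore API (pre-2022 releases).
# "model.xml"/"model.bin" are placeholders for a Model Optimizer IR model.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))   # first (often only) input
output_name = next(iter(net.outputs))
shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]

# Stand-in for a real preprocessed image or batch.
dummy = np.random.rand(*shape).astype(np.float32)
result = exec_net.infer(inputs={input_name: dummy})
print(result[output_name].shape)  # raw network output, e.g. class scores
```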
Offering
According to VMware's official website, starting with vSAN 6.6 the Standard edition supports all-flash configurations, and the Advanced edition adds deduplication, compression, and erasure coding on top of the Standard edition, but only when vSAN runs on an all-flash architecture. The deduplication and compression process on any storage platform incurs overhead and can affect latency and maximum IOPS. But the increased space efficiency that deduplication and compression bring reduces the cost per GB of usable storage capacity in all-flash configurations (a rough cost sketch appears at the end of this section). Users who build an all-flash vSAN architecture therefore gain both better storage efficiency and higher storage service quality.

The Inspur NF5280M5 is a 2U two-socket rack server, based on the new generation of Intel® Xeon® Scalable processors, that Inspur designed for Internet, IDC (Internet Data Center), cloud computing, enterprise, and telecommunications applications. It meets the needs of services that demand high network bandwidth, high computing performance, and large memory capacity, and it is a good fit for customers who want higher density and computing performance along with substantial storage.

The Intel® Optane™ SSD DC P4800X is the first product to combine memory and storage attributes. This innovative solution provides industry-leading high throughput, low latency, high quality of service, and outstanding endurance. It optimizes the data storage layer to break through data-access bottlenecks: the P4800X accelerates applications, supports fast caching and fast storage, improves per-server scalability, and reduces transaction costs for latency-sensitive workloads. In addition, it enables data centers to deploy larger, more cost-effective data sets and gain new insights from large memory pools.

The Inspur NF5280M5 vSAN ReadyNode™, based on the Intel® Select Solution for VMware vSAN*, uses the NF5280M5, a next-generation server built on Intel Xeon Scalable processors, to deliver excellent performance on a stable, efficient hardware foundation. The Intel Optane SSD DC P4800X serves as the cache tier in the vSAN architecture, handling all write-buffer operations; with latency close to DRAM, high throughput, high quality of service, and very high endurance, it supports efficient large-volume writes and heavy transaction loads. For the capacity (data) tier, the solution uses the larger-capacity Intel® SSD DC P4500, which offers excellent transfer performance and high endurance.
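To make the cost-per-GB claim above concrete, here is a back-of-the-envelope sketch. All of the numbers (drive count and size, price per GB, and the space-efficiency ratio) are hypothetical; real deduplication and compression ratios depend entirely on the data set.

```python
# Illustrative arithmetic only: how a dedup/compression ratio lowers $/usable GB.
raw_capacity_gb = 4 * 4000        # four hypothetical 4 TB data-tier SSDs
cost_per_raw_gb = 0.25            # assumed price in $/GB, not a quoted figure
space_efficiency_ratio = 2.0      # assumed vSAN dedup+compression ratio

effective_gb = raw_capacity_gb * space_efficiency_ratio
cost_per_usable_gb = (raw_capacity_gb * cost_per_raw_gb) / effective_gb
print(f"Usable capacity: {effective_gb:.0f} GB")
print(f"Cost per usable GB: ${cost_per_usable_gb:.3f}")  # half the raw $/GB at 2x
```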