When I first started in product management at Intel, I had the privilege of working with the Open Compute Project soon after it was founded in 2011 with the goal of bringing the collaborative possibilities of open source software to hardware infrastructure in data centers. Since then, I’ve worked on several programs within Intel that are tied to the OCP, and I’m humbled by how much the project has impacted the larger data center community. In fact, a recent market assessment predicts OCP’s impact will surpass $10 billion by 2022.
I’m a big believer in open standards: openness encourages competition, and competition stimulates innovation. Standards help provide the foundation on which innovation can flourish. By setting hardware design standards, the OCP gives companies the ability to innovate faster, knowing that the same specifications are being used by all. As the field of artificial intelligence (AI) evolves, unified standards that keep AI researchers from being locked into a single proprietary product are all the more important.
At this year’s OCP summit, I spoke about why we’re at an inflection point of AI evolution. To some extent, advances in AI have been limited by the types of hardware available to data scientists and researchers. However, manufacturers are working to make general purpose hardware (CPUs, GPUs, FPGAs) more performant for AI while building new, purpose-built AI chips. This will bring about the next era of AI training, when massive clusters built specifically for AI can be deployed to solve more complex challenges, powered by new kinds of compute.
Developing new compute capabilities and tools that push the boundaries of what’s possible is my passion, which is why I’m so excited to work on our Intel® Nervana™ Neural Network Processors (Intel® Nervana™ NNP) for training. This new processor is designed to free organizations from the limitations imposed by existing hardware that isn’t explicitly designed for AI. With on-chip memory directly managed by software and no standard cache hierarchy, massive amounts of compute are available on each die. It can accommodate massive bi-directional data transfer with high-speed on- and off-chip interconnects, allowing multiple chips to act as one large virtual chip that supports larger models. In coordination with major cloud service providers and key enterprise customers, we’re enabling applications that have previously been restricted by scalability limitations and excessive costs. In short, the Intel Nervana NNP for training will give developers a new machine to do amazing things with deep learning.
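The “one large virtual chip” idea can be illustrated with a toy sketch in plain Python. The `Device` class, the greedy partitioning scheme, and the memory figures below are all hypothetical, invented for illustration — this is not the NNP’s actual programming model, just the general notion of splitting a model’s layers across several chips so that a model too large for any single chip still fits on the cluster, with activations flowing chip-to-chip over the interconnect:

```python
# Toy sketch of model partitioning across multiple accelerators.
# All names and numbers are hypothetical; the point is that a model
# too big for one chip can still run when several chips cooperate
# as one large "virtual chip".

class Device:
    """One accelerator with a fixed on-chip memory budget (in MB)."""
    def __init__(self, name, memory_mb):
        self.name = name
        self.memory_mb = memory_mb
        self.layers = []  # indices of layers placed on this chip

def partition_layers(layer_sizes_mb, devices):
    """Greedily assign consecutive layers to devices so each chip's
    placement fits its memory budget; activations cross the
    interconnect wherever consecutive layers land on different chips."""
    d, used, placement = 0, 0, []
    for i, size in enumerate(layer_sizes_mb):
        if used + size > devices[d].memory_mb:
            d += 1          # spill to the next chip
            used = 0
            if d >= len(devices):
                raise MemoryError("model too large for this cluster")
        devices[d].layers.append(i)
        used += size
        placement.append(devices[d].name)
    return placement

devices = [Device("chip0", 32), Device("chip1", 32)]
# Layers totaling 48 MB exceed one 32 MB chip, but fit across two.
placement = partition_layers([16, 16, 8, 8], devices)
print(placement)  # ['chip0', 'chip0', 'chip1', 'chip1']
```

The greedy split is deliberately simple; real systems balance memory, compute, and interconnect traffic jointly, but the payoff is the same: aggregate capacity grows with the number of chips.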
At the OCP summit in March, we demoed our new Intel Nervana NNP mezzanine card based on OCP Accelerator Module (OAM) specifications announced at the event. Because we have developed this new AI hardware using the standards set by OCP for the larger ecosystem, organizations can feel confident in investing in that infrastructure now, knowing that they’ll have the option to tailor solutions as new technologies become available. Like other OCP initiatives, the OAM design specification will create a standard that everyone can innovate on – giving our customers more choices when selecting the best platform for their workloads.
Enabling the Ecosystem
Intel is one of the leading contributors to the development of OCP standards for AI hardware, much as we contribute to the open source software community with Intel-optimized TensorFlow, PyTorch, and other frameworks, as well as Intel’s open-source nGraph library for deep learning. Intel is delivering hardware and software stacks that support these standards while giving customers the ability to make modifications as their AI needs change.
I’m very excited to see how OCP will continue to help advance and democratize AI. And I can’t wait for the larger data center community to see all of the products Intel will be releasing in the coming months and years that will address some of the most pressing issues in the field. We’re all in this together to create a new category of compute which can unlock new discoveries and ultimately transform the way we live and work. To stay updated on all of Intel’s news, read more on intel.ai and follow @IntelAI and @IntelAIResearch on Twitter.
Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel, the Intel logo, and Intel Nervana are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation