Innovation for a Data-Driven Future
As customers contend with more data than ever, they need high-performance infrastructure and cloud services that can scale compute, network, and storage capacity to accommodate massive data sets. Intel has an extensive collaboration with Google and a diverse portfolio of optimized hardware. This makes running a wide range of workloads—from cloud to edge—fast, easy, and scalable for customers.
Combining Forces – A History of Innovation
Intel and Google have hardware and software collaboration roots that run deep. This collaboration is an extension of a technology alliance between the two companies that spans infrastructure optimizations, high-growth workloads like artificial intelligence, and the integration of new technologies into the Google Cloud Platform*, such as 2nd Generation Intel® Xeon® Scalable processors. Google Cloud was the first cloud service provider to offer Intel® Xeon® Scalable processors.
Powerful Foundation with Intel® Xeon® Scalable Processors
2nd Generation Intel® Xeon® Scalable processors provide a powerful foundation for Google Cloud’s compute-optimized C2, memory-optimized M2, and general-purpose N2 instances. These instances support a variety of workloads, from extreme gaming to SAP HANA*.
Faster Platform Performance with C2 Instances
WP Engine powers more WordPress websites than all other managed WordPress hosts combined. Using C2 instances on Google Cloud Platform, WP Engine achieved 60 percent faster platform performance. With Live Migration, virtual machine (VM) instances and Kubernetes containers keep running with zero planned downtime, even during system maintenance and microcode updates. For more information about these performance results, click the link below.
Keeping Multisource Data Secure in the Cloud
Intel® Software Guard Extensions (Intel® SGX) helps protect multisource data by creating memory “enclaves” for each federated learning node. This hardware-enhanced security feature helps defend data in use against attack.
Protecting Code and Data
Microsoft Azure confidential computing now features hardware-based security with Intel® SGX, which extends the platform’s capability to protect customers’ data in use, in addition to data encryption at rest and in transit.
Fast, Affordable Persistent Memory with Intel® Optane™ Technology
Optimized for data-centric workloads, Intel® Optane™ persistent memory reduces the time it takes to fetch large data sets. With data retained even during power cycles, restart times are minimized while maintaining high capacity and bandwidth.
Drive efficient performance with Intel® Optane™ persistent memory ›
New Functionality with Reduced Cost
Kingsoft Cloud wanted to add in-memory database instances and capacity—without the prohibitive cost of DRAM. Using 2nd Generation Intel® Xeon® Scalable processors with Intel® Optane™ persistent memory, the cloud service provider reduced costs while offering new cloud solutions at a price its customers could afford.
Cloud Instances Optimized for SAP HANA
Cloud computing and in-memory analytics are experiencing high levels of growth. Now, instances equipped with SAP-certified 2nd Gen Intel® Xeon® Scalable processors and Intel® Optane™ persistent memory can scale to 9 TB of memory capacity per server.
Intel® Optane™ persistent memory solves scalability and speed challenges ›
AI Made Simple, Accessible, and Fast
Joint solutions based on Google and Intel® technologies can enable high-performance computing, big data computing, analytics, and AI. Google Cloud customers can use Intel® Performance Libraries to get more performance from Intel® processors, and they have access to TensorFlow images optimized for Intel® Xeon® Scalable processors. Intel optimizations enhance Google Cloud deep learning, with more performance per core.
Learn how to enhance AI performance with Intel® technology ›
Fast Compression of Huge Data Volumes
By upgrading to Intel® Xeon® Scalable processors on the Google Cloud Platform, Descartes Labs can accelerate its compression of petabytes of data, helping drive down storage costs.
Optimized Instructions Triple Performance
Intel® Vector Neural Network Instructions (Intel® VNNI), available with Intel® Xeon® Scalable processors, improve AI performance by combining three instructions into one. The result: efficient use of compute resources, better cache usage, and fewer bandwidth bottlenecks.
See how Dell EMC* achieved a 3x improvement in performance ›
For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.
Optimizing Machine Learning Pipelines for GCP
Datatonic created a set of best practices for selecting hardware to run TensorFlow-based applications on Google Cloud Platform. In its benchmarking tests, with results averaged over 100 runs, Intel® Xeon® Scalable processors were the most cost-effective compute solution for TensorFlow applications.
Notices and Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors.
Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, visit www.intel.com/benchmarks.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Intel® technologies may require enabled hardware, software, or service activation.
Your costs and results may vary.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.