The Convergence of Cybersecurity and Artificial Intelligence

Published: 08/11/2018  

Last Updated: 08/10/2018

Protecting data for all the services critical to us in the digital world is of vital importance. Developers now face new opportunities and challenges presented by the widespread emergence of AI systems and algorithms. In this new era, Intel leads the data protection effort across the developer ecosystem with innovative technologies and solutions based on a silicon root of trust.

Hardware Security for Black Hat Conference

Cybersecurity threats have been with us since the dawn of the Internet. According to a 2015 article in The Washington Post*, Robert Metcalfe, who would later found 3Com, warned the ARPANET Working Group in 1973 that it was far too easy to gain access to the network—he described intrusions that were apparently the work of high school students.

By 1996, new animation tools such as Flash* were revolutionizing websites, but hackers found the tools also let them take remote control of computers on the Internet. Today, AI poses even bigger opportunities and challenges. According to a July 26, 2018 Dark Reading article, “AI is revolutionizing cybersecurity for both defenders and attackers as hackers, armed with the same weaponized technology, create a seemingly never-ending arms race.”

Everything Changes and Everything Stays the Same

There is a lot of talk about Zero Trust Networks as enterprise IT evolves to a cloud-first model. While some of these concepts are decades old, we have learned a lot over the years, and we are now building an ecosystem that allows deployment of newer models. Big changes are unlikely in the operational sense, as enterprise IT changes incrementally. But attackers will stay ahead of us unless we step up and protect against future attacks with the best tools in our arsenal.

Intel® research and technology not only enables advanced, next-generation usages with high-performance AI, but also applies best practices in security and privacy to ensure trustworthiness.

AI for Client Platform Security

IT departments are inundated with data, and they face huge challenges in keeping their businesses running while maximizing user experience and minimizing costs. By leveraging AI models, IT departments can intelligently recognize anomalies and proactively respond to them. Without such capabilities, tens or even hundreds of hours of valuable productivity would be lost.
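The anomaly-recognition idea can be illustrated with a minimal sketch. The statistical baseline and the telemetry values below are hypothetical stand-ins, not Intel's actual client-health models:

```python
# Minimal sketch: flag telemetry anomalies against a statistical baseline.
# Real client-health models are far richer; these values are hypothetical.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the series."""
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples)
            if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical per-minute disk-latency readings (ms) with one spike.
latency_ms = [5, 6, 5, 7, 6, 5, 120, 6, 5, 7]
print(find_anomalies(latency_ms))  # the spike at index 6 is flagged
```

In practice such a model runs on the client itself, so raw telemetry never has to leave the device, which is the privacy point made below.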

Traditional models supporting device health dictated that data be extracted from client devices, shipped to and analyzed in the cloud, and then acted on through traditional patching tools. There are several challenges with this approach:

  • Such models raise privacy concerns, as potentially sensitive end-user data is shipped to the cloud
  • This creates a large attack surface for malicious actors
  • The volume of data is so large that only a fraction can be forwarded without significantly degrading device usability
  • These efforts are labor-intensive, as experienced engineers must pore over troves of data to find the signal that lets them identify and address the root cause of the problem
  • Remediation takes days, if not weeks, in this paradigm

By enabling intelligence on edge devices, Intel enables problems to be dealt with privately and securely, with little to no impact on user productivity.

As smart-connected devices become more ubiquitous and diverse, cyber threats proliferate and become increasingly sophisticated. The security industry is increasingly relying on AI-based solutions to solve many challenging security problems for which traditional signature- or rule-based solutions are ineffective.

Beyond the algorithms themselves, AI solutions depend on the quality of input data and the availability of compute power. In addition to offering hardware-based accelerators that help AI run more efficiently, Intel is working with security industry leaders to innovate AI-based solutions that use hardware telemetry and related capabilities to detect advanced threats that software telemetry alone cannot catch. Intel® Accelerated Memory Scanning is a notable example.

Adversarial Machine Learning

AI has been demonstrated to enhance security solutions as well as many other applications. But the value of AI in enhancing security is juxtaposed with the danger of AI in exploiting security vulnerabilities. That is the main focus of an emerging research area called adversarial machine learning. Carefully crafted adversarial examples, with minor perturbations to the input, may cause a classification system to misclassify. Often, such misclassification comes with high prediction-confidence from AI algorithms.
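The adversarial-example effect can be shown on even a toy model. The sketch below is illustrative, not from the research described here: a tiny linear classifier with made-up weights is fooled by an FGSM-style perturbation that shifts every feature by only a small, bounded amount:

```python
# Illustrative sketch: a tiny linear classifier fooled by a small,
# gradient-sign (FGSM-style) perturbation. Weights and inputs are made up.
def predict(w, b, x):
    """Linear score; positive -> 'benign', negative -> 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Shift each feature by eps in the direction that lowers the score
    (the sign of the score's gradient with respect to the input)."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.9, -0.4, 0.7], -0.1        # hypothetical trained weights
x = [0.3, 0.2, 0.4]                  # original input, classified benign
x_adv = fgsm_perturb(w, x, eps=0.25) # each feature moves by at most 0.25

print(predict(w, b, x) > 0)      # True  (benign)
print(predict(w, b, x_adv) > 0)  # False (now misclassified)
```

The same principle scales to deep networks, where the perturbation can be visually imperceptible yet still flips the prediction, often with high confidence.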

This threat can be worrisome: how much should people embed AI in their everyday lives? Understanding the vulnerabilities and resiliency of machine learning, particularly deep learning algorithms, helps identify AI attack surfaces and promotes new waves of analytically secured AI frameworks. Intel is a pioneer in recognizing the importance of adversarial machine learning.

In 2016, Intel launched a three-year collaboration with a $1.5 million grant through the Intel® Science and Technology Center (ISTC) to Georgia Institute of Technology* (Georgia Tech*), focusing on adversarial-resilient security analytics (ARSA). ISTC-ARSA has expanded adversarial machine learning research into domains such as malware analysis, computer vision, and audio recognition. The cutting-edge technologies and methodologies developed through the Intel and Georgia Tech collaboration have resulted in multiple publications at top-tier security and machine learning conferences and summits.

Besides funding fundamental security research, Intel is working with leading security vendors to protect the integrity of their AI models through hardware-enhanced security capabilities. These include Intel® Software Guard Extensions (Intel® SGX) in our latest processors, and high-quality hardware telemetry data for improving the robustness of AI classifiers. These and other Intel hardware-enhanced security efforts will help to make AI more resilient and make it more difficult for adversarial attacks to succeed.

Building Up the Talent Pool

The attack surfaces identified in AI algorithms point to the need for better-designed AI frameworks and continued hardware innovation to accelerate them. The industry needs more researchers and engineers with in-depth knowledge of both AI and security to deliver the next generation of innovation. Such expertise is in great demand.

Intel acknowledges its responsibility to help build this talent pool. Through its university collaboration investments, Intel supports students pursuing research in this domain and promotes the use of frameworks developed by academics as training tools.

Related Content

Hardware-Enhanced Security from Intel. Enterprises can create much stronger security than with software alone and enjoy fast recovery cycles

Intel University Research Programs. Critical research paths explored by collaborative, university-based initiatives sponsored by Intel

Intel Accelerated Memory Scanning Eases Anti-Virus CPU Load. The Tech Report examines technology to reduce the scanning load on machines with newer Intel® CPUs
