
Federated Learning: Protecting Data at the Source


  • Federated learning is a distributed machine learning approach that enables organizational collaboration without exposing sensitive data.

  • Intel Labs worked with the University of Pennsylvania to develop technology that enabled the federation of 71 medical institutions working to identify brain tumors.

  • Learn how Intel Labs leverages Intel® Software Guard Extensions, OpenFL, Gramine, and the Intel® Distribution of OpenVINO™ toolkit to aid federated learning efforts.



Artificial intelligence (AI) improves through training over vast data sets. Typically, this means sharing and then centralizing those data sets in one location, but this becomes a concern when the training involves sensitive data. Federated learning (FL) is a distributed machine learning (ML) approach that enables organizational collaboration without exposing sensitive data or ML algorithms. Research in federated learning explores securely connecting multiple systems and datasets and removing the barriers preventing the aggregation of data for analysis. Industries such as retail, manufacturing, healthcare, and financial services can benefit from federated learning to gain valuable insights from data.
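The core idea can be sketched in a few lines: each site trains on its own records and shares only model parameters, which a coordinator then averages, weighting each site by its dataset size (the federated-averaging, or FedAvg, scheme). The sketch below is framework-free and illustrative; the sites, data, and local update rule are stand-ins, not part of any production system:

```python
# Minimal sketch of one federated-averaging (FedAvg) round: each site trains
# locally and shares only model weights -- raw records never leave the site.

def local_update(weights, local_data):
    """Stand-in for one round of local training at a site.

    Here we just nudge each weight toward the mean of the site's data;
    a real site would run SGD on its private records instead.
    """
    target = sum(local_data) / len(local_data)
    return [w + 0.5 * (target - w) for w in weights]

def federated_average(updates, sample_counts):
    """Aggregate site updates, weighting each site by its dataset size."""
    total = sum(sample_counts)
    n_params = len(updates[0])
    return [
        sum(u[i] * n for u, n in zip(updates, sample_counts)) / total
        for i in range(n_params)
    ]

# Three hypothetical hospitals; their records stay local.
site_data = {"site_a": [1.0, 2.0], "site_b": [3.0], "site_c": [5.0, 5.0, 5.0]}
global_model = [0.0, 0.0]

updates = [local_update(global_model, d) for d in site_data.values()]
counts = [len(d) for d in site_data.values()]
global_model = federated_average(updates, counts)
print(global_model)  # → [1.75, 1.75]
```

Only `updates` and `counts` cross organizational boundaries; `site_data` never does, which is the property that makes the approach attractive for sensitive domains.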

Intel Labs participates in many endeavors to encourage collaboration and bring world-changing concepts from ideation to production. Often, partnerships with academic institutions have the power to transform research methodology into real-world applications that can revolutionize industries and even save lives. In one such collaboration, we provided the Center for Biomedical Image Computing and Analytics at the University of Pennsylvania's (UPenn) Perelman School of Medicine with Intel processor-based servers to assist the Federated Tumor Segmentation (FeTS) initiative. Together, we are co-developing technology to train artificial intelligence models to identify brain tumors.

Early detection of brain tumors can reduce the impact of surgery and treatment, improving the prognosis for many patients. But as healthcare shifts from reactive to proactive scanning, the number of skilled technicians cannot keep up with the number of brain scans generated. To protect information while still leveraging ML models to automate scan analysis, Intel Labs and UPenn used data from 71 medical institutions to apply and test the efficacy of federated learning for brain tumor detection. With federated learning powered by hardware and software from Intel, sensitive data can be secured at its source, while the AI model still benefits from a larger data set.

In 2018, Penn Medicine and Intel Labs presented preliminary results on federated learning in the medical imaging domain, demonstrating that FL could train a model to over 99% of the accuracy of a model trained in the traditional, non-private method. In the years since, the project has demonstrated FL's potential to transform healthcare, as well as the value of running FL on Intel technology. Updated results have shown increased performance as well, including:


  • More than double the number of participating healthcare and research sites across six continents compared to 2020.
  • The largest and most diverse dataset of glioblastoma patients ever considered in the literature (5 TB of data from 6,314 glioblastoma patients).
  • Up to 33% more accurate brain tumor detection compared to models trained on publicly available datasets.
  • Up to 4.48x lower latency and 2.29x lower memory utilization compared to the first consensus model, resulting from model optimization using the Intel® Distribution of OpenVINO™ toolkit, which enables the consensus model to run on edge devices in clinics.


Many technical components combined to bring about these results. Working in tandem, Intel hardware and software add privacy protections for the model and data while drawing on pre-packaged tools and libraries to create an open, secure, and optimized system that is easy to deploy. Our work in federated learning rests on four key pillars. The first is Intel® Software Guard Extensions (Intel® SGX), which enforces federation rules and prevents data exposure; the second is our OpenFL framework, which leverages Intel SGX. To simplify implementation, the program uses Gramine, an open-source library OS, and once the model is trained, the Intel® Distribution of OpenVINO™ toolkit optimizes it for deployment. Read on to learn more about each of these vital elements.

Intel® Software Guard Extensions (Intel® SGX)

Over a decade ago, Intel Labs began groundbreaking research into how data, and the code that uses that data, could be protected while an application is running. The result was Intel SGX, now a mature, enterprise-grade technology available with 3rd Generation Intel® Xeon® Scalable processors. Intel SGX lets organizations isolate software and data from the underlying infrastructure (hardware or OS) through hardware-level encryption, which creates secure enclaves. In addition to helping defend against many common software-based attacks, Intel SGX's attestation mechanisms can verify that an application has not been compromised and that the processor it runs on has the latest security updates.


OpenFL

OpenFL is an open-source, Python 3-based framework for FL, designed to be an easy-to-use, secure, scalable, and extensible tool for data scientists. OpenFL is available on GitHub, along with tutorials and documentation to help organizations get started on their own FL projects. It is designed to be compatible with any ML or deep learning (DL) framework and includes tutorials and examples using TensorFlow, PyTorch, and MXNet. OpenFL combines hardware and software to enable privacy-preserving AI using Intel SGX and Gramine (more on Gramine in the next section); as of OpenFL v1.3, the framework can run within an Intel SGX enclave. This lets institutions collaborate and train their models in a federated manner while improving the protection of sensitive information.
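An OpenFL federation pairs a central aggregator with collaborators that hold the data and run the local training tasks. The stdlib-only simulation below sketches only that round-trip control flow; it is not the OpenFL API, and the class names, data, and update rule are illustrative (see the OpenFL GitHub tutorials for real setup):

```python
# Sketch of the aggregator/collaborator round trip that a federated-learning
# framework such as OpenFL automates. Stdlib-only simulation of the control
# flow; NOT the OpenFL API.

class Collaborator:
    """Holds a site's private data; only weight updates ever leave."""
    def __init__(self, data):
        self._data = data  # stays on-site for the whole federation

    def train_round(self, model):
        # Stand-in for local training: one step toward the local optimum.
        target = sum(self._data) / len(self._data)
        return model + 0.5 * (target - model), len(self._data)

class Aggregator:
    """Coordinates rounds and averages updates, weighted by sample count."""
    def __init__(self, model=0.0):
        self.model = model

    def run_round(self, collaborators):
        results = [c.train_round(self.model) for c in collaborators]
        total = sum(n for _, n in results)
        self.model = sum(m * n for m, n in results) / total

sites = [Collaborator([2.0, 2.0]), Collaborator([4.0])]
agg = Aggregator()
for _ in range(10):       # federation rounds
    agg.run_round(sites)
print(agg.model)          # converges toward the weighted mean of site optima (8/3)
```

In a real OpenFL deployment the same loop runs over mutually authenticated (mTLS) network connections, governed by a federation plan, and the collaborator-side code can execute inside an Intel SGX enclave.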

Gramine Project

The Gramine Project is now a Confidential Computing Consortium project, of which Intel is a founding member. The Gramine Project provides tools and infrastructure components for running unmodified applications on confidential computing platforms based on Intel SGX. Gramine fast-tracks secure deployment of complex software stacks within Intel SGX by eliminating additional developer effort. It also provides tools for developing end-to-end secure solutions with Intel SGX enclaves that shield proprietary code and sensitive data from hackers, whether the data is in use, in transit, or at rest. Using Gramine, organizations seeking additional privacy protection for their FL projects can adopt Intel SGX more easily. Intel SGX is the most researched, updated, and deployed hardware-based trusted execution environment (TEE) for the data center, and Gramine is one of the few frameworks that supports multi-process applications by providing a complete and secure fork implementation.
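Gramine is driven by a manifest that declares the enclave's entrypoint and which files the application may trust. A simplified, illustrative fragment is shown below; the paths, sizes, and workload are placeholders, not taken from the FeTS deployment:

```toml
# Illustrative Gramine manifest fragment (placeholder paths and sizes):
# runs an unmodified Python workload inside an SGX enclave.
libos.entrypoint = "/usr/bin/python3"
loader.log_level = "error"

sgx.debug = false
sgx.enclave_size = "4G"

# Only files listed (and measured) here are visible inside the enclave.
sgx.trusted_files = [
  "file:/usr/bin/python3",
  "file:train.py",
]
```

Because the manifest, not the application, defines the enclave boundary, the training code itself needs no SGX-specific changes, which is what makes the combination with OpenFL practical.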

Intel® Distribution of OpenVINO™ toolkit

To further facilitate use in low-resource environments, we provide a post-training, runtime-optimized version of the final consensus model. The Intel Distribution of OpenVINO toolkit includes the Model Optimizer, a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning (DL) models for optimal execution on endpoint target devices. Optimizations include reducing the model's size (such as the number of parameters and layers) and speeding up inference. The Model Optimizer produces an Intermediate Representation (IR), which then passes to the Inference Engine, where the model undergoes further optimizations based on the target end device. For the Penn Medicine project, using the Intel Distribution of OpenVINO toolkit resulted in up to 4.48x lower latency and 2.29x lower memory utilization compared to the first consensus model, created in 2020. With lower latency and memory utilization, the optimized model can run on edge systems at clinics instead of requiring large data center resources.