Federated learning (FL) is a machine learning technique in which multiple data owners collaboratively train a model, orchestrated by a central server, without pooling their data in one place.
OpenFL is a Python* library for federated learning that enables collaboration on machine learning projects without sharing sensitive or private data. In federated learning, the model moves to meet the data rather than the data moving to meet the model. Federated learning introduces two entities not found in a classical data science pipeline: the “Collaborator,” which trains on local data, and the “Aggregator,” which combines the resulting model updates. For a real-world example, read this case study where Intel Labs collaborated with 71 international healthcare and research institutions to train AI models to identify brain tumors.
OpenFL 1.5, the sixth major release of the library, introduces exciting new features and capabilities.
First, try the new experimental interface inspired by Metaflow*. You can now create custom aggregator and collaborator tasks and gain finer control over what gets sent over the federated network, and how. This interface enables new use cases that you might find interesting and useful: vertical federated learning, differential privacy, federated model watermarking, and more. We hope you'll enjoy using this new interface and look forward to hearing about what you do with it.
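To make the aggregator/collaborator split concrete, here is a minimal sketch of one federated averaging round with each role written as an explicit task. This is a conceptual illustration only, not the OpenFL experimental API; the function names, the toy "training" step, and the data are all assumptions made for the example.

```python
# Conceptual sketch of one federated round with explicit aggregator and
# collaborator tasks. NOT the OpenFL experimental API -- names, the toy
# local-training step, and the data below are illustrative assumptions.

def collaborator_task(weights, local_data):
    """Simulate local training: move weights halfway toward local data."""
    lr = 0.5
    return [w + lr * (x - w) for w, x in zip(weights, local_data)]

def aggregator_task(updates):
    """Combine collaborator updates by averaging (federated averaging)."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# One round over three simulated collaborators, each with private data.
global_weights = [0.0, 0.0]
local_datasets = [[2.0, 4.0], [4.0, 8.0], [6.0, 12.0]]

updates = [collaborator_task(global_weights, d) for d in local_datasets]
global_weights = aggregator_task(updates)
print(global_weights)  # -> [2.0, 4.0]
```

Note that only the weight updates cross the network; each collaborator's dataset never leaves its owner, which is the core idea the Collaborator/Aggregator split enforces.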
A team from VMware* made an important contribution to OpenFL called the EDEN compression pipeline, based on their research in federated learning. This contribution makes OpenFL more scalable by broadening the environments in which the framework can operate while simultaneously reducing network traffic across nodes.
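To see why compression matters here, consider a much simpler stand-in for a lossy update compressor: send one sign bit per value plus a single per-tensor scale instead of full 32-bit floats. This toy sketch is not the EDEN algorithm (EDEN uses random rotations and carefully optimized quantization), but it illustrates the kind of bandwidth saving such a pipeline targets.

```python
# Toy lossy compressor in the spirit of gradient-compression pipelines:
# transmit one sign bit per value plus one scale, instead of full floats.
# This is a simplified stand-in for illustration, NOT the EDEN algorithm.

def compress(values):
    """Keep only the signs and the mean absolute magnitude."""
    scale = sum(abs(v) for v in values) / len(values)
    signs = [1 if v >= 0 else -1 for v in values]
    return scale, signs

def decompress(scale, signs):
    """Reconstruct an approximation of the original update."""
    return [scale * s for s in signs]

update = [0.5, -1.5, 2.0, -4.0]
scale, signs = compress(update)
restored = decompress(scale, signs)
print(restored)  # -> [2.0, -2.0, 2.0, -2.0]
```

Even this crude scheme shrinks each value from 32 bits to roughly 1 bit at the cost of reconstruction error; schemes like EDEN are designed to get similar savings with much better accuracy guarantees.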
Do you use Flax*, a neural network library for JAX designed for flexibility? OpenFL now supports it, and you can see how it works by trying out a tutorial here. OpenFL is designed to be flexible and extensible, and it already supports many popular machine learning frameworks, including TensorFlow*, PyTorch*, and MXNet*. If you want to choose your framework and start playing with federated learning, there are tutorials available here. You can reach out for help on our Slack* channel if you need it.
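The reason one federation framework can support so many training frameworks is that each one is adapted to a common representation of named weight arrays, so the aggregation logic stays framework-agnostic. A minimal sketch of that idea follows; the class and function names are hypothetical and do not reflect OpenFL's actual adapter API.

```python
# Sketch of the idea behind multi-framework support: every framework
# adapter converts its native model to/from a dict of named weight
# arrays, so aggregation never needs framework-specific code.
# Names here are hypothetical, NOT OpenFL's actual adapter API.

class DictModelAdapter:
    """Adapter for a 'model' that is simply a dict of weight lists."""

    def get_weights(self, model):
        return {name: list(vals) for name, vals in model.items()}

    def set_weights(self, model, weights):
        for name, vals in weights.items():
            model[name] = list(vals)

def federated_average(weight_dicts):
    """Average each named tensor across all collaborators."""
    n = len(weight_dicts)
    return {
        name: [sum(vs) / n for vs in zip(*(d[name] for d in weight_dicts))]
        for name in weight_dicts[0]
    }

adapter = DictModelAdapter()
models = [{"w": [1.0, 2.0]}, {"w": [3.0, 6.0]}]
avg = federated_average([adapter.get_weights(m) for m in models])
print(avg)  # -> {'w': [2.0, 4.0]}
```

Adding support for a new framework like Flax then amounts to writing one more adapter, without touching the aggregation or networking layers.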
OpenFL integrates with different machine learning software as well as with hardware. This latest release provides highly anticipated support for the Habana* accelerator, which you can try on Amazon Web Services* (AWS). Check out the documentation, and provide feedback on GitHub*. OpenFL, hosted by Intel, aims to be community driven and welcomes contributions back to the project.
You can see a full list of everything that’s new in 1.5 here.
About the Author
Olga Perepelkina, AI Product Manager at Intel
She holds a PhD in Neuroscience and a postgraduate degree in machine learning/data science. She's also an industrial adviser at the School of Doctoral Training at the University of Glasgow. Find her on LinkedIn.